var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz
[binary tar archive payload owned by core:core — gzip-compressed kubelet.log; contents not recoverable as text]
9pT֕M|5#hRot^~co#TSn"iqv9%U^Z Dp};81kQ؜o@8WRW{yd]%+m3xwew_Tkd2 im0kwE:oen=fTrg:,8GU&xQ?pӿ> &L^jqY]\]8(ud#pﭖ̳rʃ^/0FԤ[^Rg :X,&Ԕ,wQQj 1XZ9uzu4I\'Isk KVac:zJJԐ`( z,bVSXb8G0,~Oy ` =,wx-)O.tXk`k΀eCk\UcV}-gV}z~W 9Unb?Z)aqPɴQҁ!&[nwpk%6ܢ]U'Ѐ w-ήG{y+^)@;C-qDbKǝ@<-^yWh-D:h0GQc(,<^jFuw쌜%{dM||])8g"&Jz.MU)Y0- w"@-O~ع,t]>r|*\ԄZ$Fab5m (hdٔp{L|'ݲf:4MX^G*<@cRɜL̰4N+UAPvdVX10np=f] :@ TSAuw|Fs=dmX㛁TiiMy).M%JU=|jWɢÃ˃J4#zen0oML$ O'6 ]8RUΣ& 3j.-|uWT^]_5_gw#~^Yg+;(Ft K#PHW9wt6 ifsyVy ЇQ)v\!?o& =ٻ]79[G%Q'lm ѺPuCdT>\zW,RŰB~ԗ ׻<vݏoO^ux'} 7W1~+ bm~] s#] M-3r͂/1"]Ne/3QQ9=7լ}W}oR}ʧ:^2O~Rao\Z"4TQ-f.|%BXDC[;)~\WYDC%-fipSb80=s*d;f҃5,AqxDQ-y$5' K5:2 qf\DʼNc.].+rEZw!HD ;p0K  pF94%\0$++ua0|YB7zptѢaFhpv4vlvFh#s1Z07[i,swН'};Sٌ]ی3vрNl>JSs{u5̺ ٥!CVm*d>%Ec&kMA_<_Np:=K'm[K2a/{YB׋3u|B(e{N=[ˎs|å:|pe߳zx~o;r/ǑHKz@B&OWMGW9;*#d8Dڝ0y"WLp1+5$y{9G(3!뼋>euQ{q-XVvPAȨ]-0 ÊDj,'>cβ%BBzߖFP ؖ \`JK)FD߂>P=`BUZ %cuwaӎkWr.aNǠt2Z''+#uSU"=39?L 7=_GI58γ"O]5&Ӂ% `P ۖ)Obo5a O]|QfbNi?BI"FًɐK!p2 qp"zAOa! F"8rȑ39"#cv 529&r6ȹQjEw9lVJHB";#a$t P0n!-/!O/`O l=C \4>L&&gx(7̡27M.s27& &l(727M.s27M.s2{,87.dnr\&enr\&ګ]pKDJ*+ #@-EhAF8CH'osOܓ7=ysOܓ7=ysOܓ7#U)ƻ/챴ZXu<01J- ZA܇@ݼMR+ѷU5)+6C_'!S^I\IhdKܳoW-k͛HFi0u R\~ۃ;?OySM/L8D҃Eu.oo|w:qv0Ř=_2}Z={iy Swo`MKk[B}{K뚑A\0xb`:OƣUG.NJ!׵YVgu3VR<|243F2+ŁL7PVj6UjxLxq{'x8opûP?w?|8wdzj"h?tuWMs{]v]>wh.yM7G ha:Y[R/ߍY{g&'5QS7.E60΃ j~ iYE+_K{8؂n]ZmVecLqxoQ" VJV CArVƒǜtmfk&DiXW|ayM>U'`m;VQb4Qy4:b8 ypGm|` :KdcgFӓߵSw]llL\Yl%; evV$,zPbs\ӵqMU} ׉a/yɼ͚0~wE$7tN5 61D¿a AXyHf|*D|jD|:hr景j(;mJ `A(5_1)BeȐF|Qh2qssq#m|f#dKvU3#43O44ħLI7}飬Lf,}ťfqq:"7G7sԁ5 A4$ro)RȮWzkeXXϗТGzGM;JJ5Ɓ*#va3N"4zF+U%X;^'8a[`֗zq~sk,23|o}%SOK{v~Ls9&mKjP8hj &cco={ƞgIi Zq„ YY* (Hh͝.Y@dt\tBŜB^ r(E(1xFi:^Lw3Y&Fe-,Jos4W*kߚD:\-c'O% thJ.yno|s#!/\0j'̬9J\2h̦ARU]yAEԻߢC\]T# L+b1MH!E\,Iz1`略TB4wٻv@>?r@C :Lj%QM$cv0SbvEA2Tj);UTB^3°HSP'$V*Ca"n-Աv&vEiLf)M;Ƀ=O\J8y?ˆ-}YEvfe)!9|*Λ$G)Ш#[2I@dSZ0FtKf-9FrQB2Pg6ej`!cQA0HuU:F}XnZk2/n=;%흿(Єhh{OUrbѲ=Ce";l|=N)*8Hv|(ĩSuBbJVTl8>¾8_$zBoز*{?ҕ8;N/j_e}ڽiĻ$kMVɆ1!aC-%DZ H+66@2^tRdk2.J.6sѬT cWaǰ۷_Ղ"yg>>!_'ӳ (]'R N0qy&+*Oärќ̀bj+Hճ"y*[sM^YyKY@D΁3Cev:Pc3qvQZfc?lO7!] t1KTi;, GduQ5x|8QMהV )EOh#-x(1R<K5)KֳE`[,2jbbBQF'\K)Ζ'):mCڍzԟ[1@EJ5:IdR`Q~ڀ.BlؔY{jl]s3ruNrbP{ݖ?*y|^eh^HhC`+5H.4ѠF*!I-HD>5|2"I$ XNWC-8P"LY{D1tF"\ yv@QT$?=]I3(,!nYݠwأ,lȼʓAj$-tfٓߏKR}7Vco}%H8ga8s?7 G4od/'UlY_QJ7l#H>$-E*ru 3t\ρ^ ^N"k ֌J{qibpݜl{Kk%,-e-v(%kt]s #W(ۇg;`+X\ XF_uR:j2*:dԠ F #*uQ*%uA*[+ ()8Ee0 \|+4ԜL'_Ii3J?FøxUnol^-_B3~;ZƝHϧ;띷I@:"B jɚ.XDE&OMs|G="0  3@Ve< ATL:$y'ݚ bQ&化 d5:,ԬJ$]6 %`:anGt@~%>&59խ%srR-PkgYep1FyW1i|SLd,%:b"$cjt%TUdXꁧLr=kG_*W?RD9.[{3%>}s17WnOMj {bS eN1P VH' f VHҧw+f2\*(H2`\e_~E{tg3hj'4SbCz8٨߇AtՖ jOk{q9M+Mhp/Uĵ+JxM5%H}:5Rj~BMjTU56ռMnO +Rtp;>\xqW0FEn)t~Xw&Oo7qZѯid_Û@C17W{Xb?T}敖>­EJB~fWbMeӉ܌kkc,R@a̢aR{M0iePo6y'}Q2v$:=0+'N[+s> nF[nM=r3Rb.ֲy=gVtpc<)מ'DjX TNj"jnץ%QY ?V?ϷEcj)5RHDbt.˃2]dD8* A »v^@/"oo&_{[mvu Ņo9ِZQJ'u_vk_:l'K^6wCO:9C0C71Tr m#2WI`\%q>s \%)+4WDHU1WI\\%i$䝹zj<"s~ҖJj|, HJRԙWhN/UX1WI\u4{WL3oJRlg^X92w>0&'#Z:Y|hwh}j|bjv30y$^ @.]Owuo`} >lRf;IdJ {uр'?' 
Ry ,'Tsji h)< a 3*]beIN|^~) v x|n!b潎sf0CF"0PL*U6d/%K>p2 qp"zAOfS[=ȑ3*6cB)7j<۔TGɑE0䰅G)'RHDcۗTZg%ق>v;HVܴ/d^ן!ZWQ`[⅗rgрmP 'L` Sws߻m ܳUDIFpQz" uXEv:m: _1^KT5`-y25γ&qfoYF@A5L[()D VTTHYBS"",SMV&oiIQݛ2X"Uk/؅4Kmo^p.o5)*Ȍ{yv1u!#ݨ3+OfJyq t ^t{`-cB`f3J'\t6Bp'x@RT#oWڮ [% b5B}gv|Yg$e{%ckiiV1LebUS@Mwr0bae`1VYE 6&͝QFGM{K*$l 50Cܚ8gRp8 'dgG;ӱc d+xE,_Y %(iYE+?-g/hBb/Jil8™ gE":Q}PHu!Si䃡[^(䖲O!HD ;p0K c88 Ah Wab$c0;hYro@tӢaFh [I.7`8ן@WGFºKx*sX?PUcwszckb:BxU*9-U]&;:  сYuh@vJ=\5AD@l&C51pO< uE#cwU,۟\XCF}WLg=Z_$nkY$x ʍT,ES SD1K'=鍣q_t5s 4>=0(Ld1dN\̰4N+b (O3@2pOBӪ*ք kݏ^ nQ4uR Ct!xwx"k[,^ީ~Կ]ϵ4m4; 'DxF4Q98ƂP԰`b6DEH7ڝH$șĀZ"ƌEˑB2 #@Qo+ҥI 盧^ʘC;?o!*\W-Dvc𮌆 F.5*JӡYڅ)zUy2yR>EGIG{@'XBe)D0°ܠ )Rbwb"\I13x}:MU@`8kH ˥CRJ%T\w tt~|pW՝^ލ)MI=J0O~+m̷(H8tmRԾc"q'<(֊KXB JU@Bҽxf_?'?܎/a$36_y/.z6sEhoɐ& 3Z[nj47Xc3Q0iJ>.fmU㸱UV:dSM}eR4mHGa2CRȥo ^vO GV,U*VU*@'*\:>=wo>?|GL? : IbgIwQ< M󶚆&MS6iZz7hW6T+k9(˛/\:?^ B}fkfq}Ob4'ruT?=j1=wKAlߦӸ=uy#-b%< |(ȈWN0adEO"ΩA3xԚIhY#&ݎó8Ll^VNq'Ϸ<ؑQo RPG@U>g03("RN;d ;i`ۛwpm6 h:AщmSVge0]w9ūt:5-UӋ0i) ѤtJ v,)RzJ$2нƔNq{ "iXgscbf[p^r" $cLK21Dw`*1K÷2f+|~ >ːP.;>[6 riIHӫ͟U>V}ٞNtsb{gㅍΜÃIɭh`ݠ93.9U9X51х+ҎĊc7HNGY<3-c1 %.ZT<Y,e|D:jXXLjAuTu.2A0ɵ r B($S1f4j|9#,RFXp4g툳5+@\QפgPrhKIii3= ҫmmwPeLVvO2ȷV[E;6a2%Q"sX[tdp]Z.ȁq)ݗ zy?G)jJ6,&o! *5/:K®X,L2z޼͸΁CZ?Xȕ)oݘVWjR 0^c JgP*UWWÙk*YsoWfY!̠dVkծG5 >ߠ湒!eiot5Ay켺ꚊMzWT5'ݟ!ee3X.؍=Gb:y\[_sD5377fP.L]Ole#$"cLx;GA ve]!zpi\qaG1u, He#qk# !JgYARum`) 68|ux.8jcͽ,i+E E^Enp ~ rՉPX 9N?`RY$PNJ!At)r-OY?Qm#GEȧôwI\0p|5%E߷jٲ7˔-/-])XOj4o;bzEVR,[6Q/ʛ^M^xc< F0ƩU#Heiq"9,$W-.'ҎC w6 s\F?>QTR.0@]}^*A25V"Z9Ls9Emqb*{й>Z(H5yϑޅ!p1$$# w 捎񤤠@E'+(Dsy jõqP(I!F"t,10ZgM&h-Y^xmL(Re.r@(X87*J$jCs93y^.i-p,QNvGtB7BR)8:) -)0mƜ|p% vŌ\-WGϥ1s1 W@N&imy<;ա/Ƙ%4!It$Xv\"s-'>e.'D%NqqV@}y$KU1Ar-jIvh'V` oTx%,qyav zI$]80j.=hOVs*EF0TPHe& 1 ^9+*H uBV6ZJг=$)NH]P$P !=#:jgXkc#KNFCJӵޕ jhBkLpώ.pd/b{Ӷ',[DdWZ5tZ6WMDV.RSֲ X,=6 bƂX4D& M<h~H\G*Sxd^@8 I՜W}TJ'D'ǖdGtKvDO~$ ?"Hi=V"D#&s*aBRň16τ-3zVn*կJQjKB-UB hBd2uLii W</$t:D!LJ+s|?0iV%"U$-'!juw<_OKo=xKDbhϘ'."Od$穔NQGHyr^a zlaY5n'hFA)0^<oQBhL*0gT*W*P-W>c"&j$'gJZvvTGry zCFH] &cpEy Oޟ IxhRs 'FzI-AwQ@EJksGDpLY@UVR IHL>p0F_Mjf%C&C)*f<_*ـ6g6" BbOu cp4Pο}'{Iy39j:v{4len\b )4b_IgYP s@5P cx}[.Ƴ iozvNZuN T;|еYJ{t.dc3$Ǟxh⡁h~Ҋ <r-Ù,AW_]&Lu08.Go9fGf$۪puUcj%W/8%Ղ&|Ś˫ףнܐ4i(Ͼ_ՂXo!u*Yi*5Xո[57- yUy@/{ӱ} s5LCɤul%?O{iU$-2il'gGA;ZA79{lffVI"46Ao:yO|q7g55gudC7ߓ`4}m`ܸgQ+kzor3ަ_> ̊ewo/t4wߛ&;eV,DՌvf՚f3g꭪~vl7OeiD% >P^i^zqGQd.E'%Ğ_P/x7MZ/))kLhbQ*ŔOhQSkXdcXtZMbt(NopzxƫvJUu~~UgVr+;&owa*v& ;5Cֱ8:ApG y# {ilrZ :Ͻ:I8׃f gڻ$5rJH+g4(IYTDיK'?h 2RtqI)us7'N(d> \kY ;U)RkxE BB)`tm`!3տ ; c;&.{A6oi&Q/-0[+7?VmSpxHF*Oα#_x]I)rkDkJArDGHtB*,ъRX~ I95Y=4tW\_0k.G;M;>?iv(qfV]("Aթ-,:Z"El89yW3Y ;Y7Fhg>=Gg_=jsx>❭ Z+096ٶ:)Gi!mP/[B<(;Ƴ[(V**$og Gǃ88%q|z]O~xݫzu<޻ ܫс[Mǫ_iUijoѴMӶEVrGG+h a\Y[__>|a?'t~YDiOw&YB \p fg^ Ϋh{N_^<B,1h@%Ψ뷦.B_Jd%O`U`JIm*;CP @hRcqX`^Ӫp׌(M#Dh,&"3p.:4*kRvTȵj2c5)\;FxO;n 5zarIbQpOF4xuhF\M) `^^ M1&$ g4ZUrήv#Yݲ,}N.ILc d?G^=5xY{i-A7>sqϳ ɀbp5(1BVS >˾z2iKiZwΰg8tF)+oOZq4\4MrsٛoӨߵ׮#N`^уEɝxU]wOg.RVl:і?m*)P*,::f~dur|k'|s ijk@{te'DO^vW\w{rީ//4{0.9b?)LcĤZ-i<# kmv2wе#6tVfk9BGwY3A&,$HEHi& \YQIHLF3;TCN'ǹvsd>Tֻ6 Sq,0EzRպYY@/i:W]>{ _9^2Næhm5vEoY>e--_Yfߜ~ &$P6a M<ل`yY,C!ބXxL,vh1v{S_ꛐI FST1S_x!$}Q r9e**bqR+.r|d!z ]-V{ڑP\Vf:CcoL+Ʋx޺McN4F^'0UG㫅.JWIT_6G_^{1^*__(/y/8bLjBէ)e,~BKwA?dz2Kmd`2%9sp`=>8LL+l2fp '$k,9dϩ>ʃddͽ!ۙsdT(S!d5!ʤl$Â3cn2|JZQW 0qEfd.3=w*hP;[SC=#7tl}y)Ir01P~EfPs@IDs7CBCUEM%פŤ22"+^# !DlպYQI1r2 GC\|9_dǻ_ݯdF/ub׽KYԝdr*6T,d!!o?*ifG&!ZH4?et@@OrZNT&]Fqˆ$ښC`K[YrAГb90]IrKs-=nL Uug32*հd슅'M?fܑabbeΘwO@ƟΆf])'d%x"¢fU*9΂!ҕ#uUzk}uQK+3UvBph2v̺,%pUf7bWfդc'ԾS]=]'-2$QGiMD-F-]WB)+Il%ļ0ܹx6>8L̐YQ&CXtQ$a&J@FfE:JFu2yl֨_d|*x(XM>ww`[AD=zcr^Km4`ZZ\NHjY2[i!^s3 mH6\  Is#ΌdI 
0QX;UlFO:.Nr^g5)Ee\=. p-KJ/dL*#`0!`%O2D ОH#{*'֒C-:L}Gnt֍bvr\HN 43*"q+uJ5Bk1"!f^A׵Zn f(M.3!FI1ZDcf#0 cPѲh"L2Bf\vntz,2#Iut7Ne,3⑦30 Z 'vtZ`b|6_4["=-$Nɨr d"si2LRYusަ R6s L?d2:gk!594 ɇqcA@z3YGV+cCXr0΁~BBf5$ }&6>8U!Crg:l[1Er!YfQL/Y̐*L\t}r#nLer-/<_*0Xg0# Ս~TIHIh5{i0[+ YCJsFMl35.M+*~E=hBQMo%\^,-f`~jXr|O^!z";R|UNP.7Cn 2I*N{"d_c/Upgt"%J^dpRLe`\f2t? niyg<&g"DZ-_ 4ގH r0#TcQ( MpìknT99]Lq}e xdmssPTAوPeJ dzŋ^^K["(/2&M&@Z!eI`Yq\8- o]rC] d#"r,W G,I(RN).pQS*Cl=܍ղ'R]oP*GeyA>vgY$ݙ#DCBq )"O~G*S躏gEOnd?C;ZGF'{F溴>&C Ȑ[ DY':gL?({ `UԢCg5C!C:Ue7 ;ھR / 3F-H1%d$ D;`L d7(ՠs]9vEAMKmpn<7?zUG+5%:57Ț4`iֵ2CMx3f[oM=r3vShyl-sfqJ'9cYў'DjX TNj"juiy><(!*k`xnycj)5RHDbt.˃2]dD8* A a3^@ϊavVRӍ96н=Ι˫2r4 ~:Cj!'#Osd]`Ϳ WTbၱXhT V5bK1VBYvI:Xdz\nއwq!bBwZDD\PT{lGװG)# E 5\7 ǃ ?~NJ+$nR|!?b%PP fe1*$PkXe$,)s[ۈloUM" hNjy'wPYo3y"~9f{}o.gu1FrQ4\,שZ\((U$Ȕ2d+MJXi G,x'$x."Ghfm1 +MGa#r6ȹM:2,"1rbrX+"D46~3t17CzSӚa6oy yv{.@+y|dDS0 %ܤY{N)xa> ޜhHySj-y^<]jl_a ТL]ɳ4:~/Smtϡ =ZYK0X`"k"QKiJĨ6xΨjgAlggAKҖI}5x3y #AXrPQ@9 (G(rPQ@/("4Q@y[&G(rPQ@b615T( 7sPQ@9 (G(rPzF")jL>_*o;bLK[\!#C >ɔ7y+ME* g<9F!oRSC-qDbKǝ@S\ b78%Zfk\!ap 2 a2HҎY |  kYÎِ;Mc vİKxVFܞ]_oylt'V|-AxcX|MnE:C-ES SD1ӚjH1P2Ll4H:\(\w<ȵf&*#@cRY@Pfi%1`f $opR@qB/T9W2ceOm h7hu(sJRAu2c;"H<3\,V(粶?Ҵ!9w O&󴚣W}?Iz)!}N>Wޛ8Ptү?I`%p<;c?"-GXW MJ?m9Lu p>r6u!# & i!RG&d@9Ё&2zK0Ře{D bFpK ugޤ `ruUކόƃްRWAJՉVK.<^'q=Mthݶ=SJKiw"B8%t('KBP*ZU@,.X)8x C/a@-m=3{Z0 ^0hL ZU>mL"0> Qw3 |]u!9^d@Z`061dDTQex+fK} `i/`K,2Pl."%ԗL)@vOIсYup9٭dCM DfRfjH0TsQbY]!j=1m V[aYW([X~$"fv6PA$@/6 2Hc@8;!,I;A.F2?lukTw@ƌd5+p{}M6,xmSTsn[2w}">*6~izC٩ [ -.gi ,jFfrv27PxcS/.+9)O-GҿK=.˝BNrZIha$YPX@x!I Nj4v#`A)ıHґ^q88U_|&[)aY1yY䕇87z[V>C90i1*cBZI4rBTLχ';E"HtҊNp!(zn,! TBHt%xZ)3J* J/s;ui=όYۤ fhgw/oO;f?w.c/bKw 2]ux ػ>hڞ((.,Ifo^HNyR)VB :L`ȣ!iְ+nYI]QFGυV()5k/t&@اS Rpc)2\kq/@r}1 2u0WWfVE6{W\=#ʪQ_y!s⒯jRiEbS>\Jث*wiE"2$8*mAX 3uV瑣( E'T6F)D;MM.x뜡DB0*Q(MnpR9& Oi/Ap<^Zk(ʡW_ ]=Wa⭭ r^j?i޹T[י'aQx'gN_An6˂Bruf#\=;Dק㭟۟KWMYm~uX’UkWިyY[-j^)W7H6oc!mjzGŭ.r##+ސqi/nfWWd!>&|aa?TU<"]$j<3D8ÄIq q*KC!'Qe}ǯc3!G HF  ZpD)L.$,Sp"] ڻ5Q#Q$gAO!|;b\p BWh_swG': 8>J{(ꕔ8Pz M>٩Lâ|UɃ5q*@8jQZSteȉxE(XjBHΑ&S}VV!ǂŮX[0mB*͌l3R Y}\).3?ϞгoO˺7*;MOlČ윐z/ RBѠwK0n(؅=BVkYM!'>`ce8Ipb&&10D*=~Rpfl7s_Xu܋aQMڽuDzd.RdtR@L)Q^rM71E&8z+DbƔ$iA#d )`yT5!b_di/qD I`Š:*[,bp:_HwfbF,f{?fgˆ`J)ј5p/9+P[MJ E>G2^|c)!RQ&IEyPR\0eTq3=F ϛJPT ݌sWg̥},f%EViϋ=/n؏$(4'A$[& 8v t8dHE'`L2Jsܔg?>S#~&_|'Tb㢲"X+"*dZ[9 SgA>D|={j).dR\,B"B"TY(ji!b國NqlYH\)+.hCL$҄!zlwI4GU:Jgũh^9 {W;M8̫}ZVbq^[J\i9t{虼#=#Ftp_*$8JSR&s*FR)xzEr]5ou^` hZ/е@{*]e(hTjY+`{Ziq Cm>n)nd_3ؖmj _8}sМ񘋼9x>'|C?dz0`!v9Ƈ w> 1v<%et/jdpDÔGY=F\sw{p7Osn ~?G:~m=-z'[_W7rѢ憖:o>M:h;x֐1`wFcl ! T"|z l:<>&WO3<-R~$əѕT\pBSK4Vy?~;N=v{ /  iY ^[L!1\m$k.s!3y?FC&+-МX(Yz#u6*kw1!,>`7z)<]~ZJsg=8zf| ׹>*(D.^[MyV)HL)ĄN}/}_YkG !;Cw)* lTsn ݬ/ռpY :$Ɠ5r-Ub C*D${ߝ聾V=]͢~GF'u Y·.. 
+=|;6vd M'|?Y[h8NiΎ߾5Se:wCś2_.0Y>ak@30'r3cx-KP;?C?9]ѵdɊ+ptOÑw8jqIgYxsO44ZwotVk K 8(j":3Z݇6M-:u24NѺ[0-m8rrmn$V70 j[ծYܢvAe+/N b ti^HV6oX&XW0;T U.Vץռ͉;mUV{cбa}K5Ii@l)"K~1 {i]&$-6ijgc29%=%F?%oX x8=3C#ChՑS.|#tagז nۂ UL[y@[湅zeeX&lys~1NŖ.vNDٌ~澳5[iNj~h )뵉^ |*kC|ƔR J5J#>8g]L,K,BKe_u{)N:Eƈ4M,aZ):-W \)ɢL+eH/[O',怦R{U.7^ ^y tIpC茘C +b.Q*^ճCT*t`Şv˟vC\t}+]ꑬR }#'?9=ek=Y;95PjcmbR7;~}<(uOW\uie u&,{ĐQ׹@j'R>wNp'KW;\=]Ұ=t{W\; d[n߄`;vk$' N&8d԰ZLTΓ6yfړd6夿jJ W40Ƭl0z`ǜMFqGUӡ+-@ LuȓYGnأ2᫡+vR; fcG5,`eX_xib`x erkݯoLPzkmE.@GaM-hX`7Ag,z8qpؖl<ѤkFGwC2Q \"MkJd{4.pݜ;u=9IלOgo }떥V0#*(`FBt=ZV֜}D֭c`\b҅V;sb#Vՠ0(XppsZHhhBBP7pS BQr`Nu>Wl"ZwHWR sl*e٨]])I]`qeD%_gN(est54'U Ot(9殾b;6=#P U-yU=U:FPԲBX b]X'QSՂp6tJv@.x*\芀E6t ]%WW%ʎYo},]%ZgCW` ]%:9J::Bb]%fCW /VVжUB)qGWHWXeہUSA}"Tl݁{YF4 )χ\sP0Bv SGBi.sZb' BW -oL(E"zt0"'' OGQW -mm0ut,t1,[}ֻ EWVUBEGW_ ]ӝzfCpԃKQ-gz(E.V5wtwk*˩U-rZE]N`2|r*'k/bT͛-Xa2j HW\jTK(1J5`'S- l\s޳L(iGWHWpFt ]%RBW mRˎ'I]%:y{Q6`By*$#+Nr*:Eèq̊nNjA4H=eCWlT%'vN()hiSZ9,wuWrIR-3e\ҽZl3HD޲2\9ӌ#erBz˯ЭeTz3twD?5uh|\l@IbEmSu.ؒAsKE> 7 UBI PJ1ҕj]`UD.t vJ(%JS!iNY` ͅZ,NW eѕرEQ%U-gՄ{В([v}AW}3(Έ`+;PlPRaObn0:JpȅZNWcU9q24uMq\FsUB);uutŘ8'J̆\%s+@P8rrdvwhS.N+Eڻrf-1`qC_P-%m鄒uiZpQNt|Te^U&݊Qҕ ެ`Wf3gЪJHGWGHWKNhFtk ]\A Hhm;]%;F‚eDWe4w ]% %CWrǦ}$(‡Xj[%7lAW} ʈ`ʅmIut,tEw ?9#oFgֆhGWw:KibB=E2GϻdS &,m*J;=FU6rJrƽOpW'n=L2HXgW'?-.'_~8: 3sgl޾۷r7ߧ'/I?Bl *X0B`D$+h^7"a8P@垬Mzҿ˷m3eKm&5īmM$h/o{Lp} pqgχ|ޕ%X=ḽ~&O9W*bߌ*~6 nWI#%T\*uT8 yӱL.OWeb"XZYh6S)ᤶO.zTL8X4L_t&h0g*'heuo;έ|4^4?hgWlWM<}9 u)njWܐƥkr^ Wӿn,˞ܾ{r9p8Yzo[L/F}4ӌn>.ï(Rc$}_`-Sy #@Rk@jA"KvJDajHxM}wA-KS^e쫯:2h >m@o_ܣo!U#3\x(O{KpܿekCuc5pN82 b #j gb4M ͆jj`1xUEs̑>~;!_-Ͽ %](aYRiKkc"b RSV7~4=)xfu KX&8!>=ND/HkA$#g9.2>FaiT#R1G EέRȰ, Ƹ#aREbشFl mRjxt*iJ񹕗ӘfU׿P}LDuV6;˴/@TO˗[NU|Y/]IHLW5%1wrs:rlo,Zف7o"Z.] uM> fg]W\LƟUDZxV+{ޓh߯^MHdˆ^FcD2Y+냉KM-a$EV 1XUj V~Xdbu[S7K[,Wi3\~Cv쵲2aDD&p TQ͍QmQ'(߄of͹҃nz,IMsa`ɩj܅y8wMD - Z!S&S=mMv_6v n,9%j#DX:$H@9[=e署DKF:0 ܁@Pc(,<DXP8zC1d^upƭQ&f6L^G*Yoܫ? %?5cz)uHهiϋ 'KKxۃnã>a/43MX+pҴ "aA J1u}3s;Og@0o%P0 l F.C(\:$U W*ApT_a>QgW[Oc(|*N>ޅ)dYKP߮~[WrQA&\q$NfidJ}SD`Ɗ3B*w%Z[rœ/糳l͞σQ>-j뚭L~ʙWgfa}ڒtԭ919mِ,b7fS57 }KLFhxuq5WuUmn++Ctq{Q4MUIs0Ob/>3wX,%*KTMc`xs30?ӛ߿7/o?_¨'Yp00BHwW?Ț75ʚ*K֪R/+||i[}8ҡZv J~~9 .m3z5>GZl3+$ѕz{~ |}ˆ[&Q6*ŵs_R }Zι>c0^1njZ~yB?I"q~34)\1[92hZ3A3cD@_px eIgXzͰ溎$zKj&sd ZYhqA2q[."e^i1#U2n]ui/i[#f54f73 r0Ý:ugǬ~} xVSYՅ&t'?0FPwa[sc@ lr1 h/9vR0|%gS4aw{sϓm<{~'ȼSD@ArC!faG: lp6HF_jRz\Tkn`SIRCnogw}χ$tzq"|tFFO@>j6Trsۣ_?FGO-%a4=돭ill8[TE-ꁳB* {NRHRQ4m~Cqga謧NNn}0ru!#x(]aZI'DDZz[)Ƅ((3L0JP:=8碣D;{x$Gƾd4E3D`> . 
,l*nnt0ʉrϹ{%Fwj$ZZU]ҕgL&)-<r8 1²F{p-Xhm **h2:Zpy*WNp 5ЩC{Fx=~EYȳd"yM{%`Jgh(B*\8 ,ZPB>Aم|m'D]4;]mo9+?c;s`ws_Ʋd'+vdqKMٲtSjU|X|*A JMM" 8K4@ ,G21HDq]ħ@YѕMQN8eمx* @;Jcrސs`}n8I8B{-Ƒ0=juMsCp1=^}&*68^\eTO59t[Q+%Df쏯,qq%|׺2*3kaKz { Kr@9"6rTB2w Ȣ$0 681,@QFSB^^j}^|XSvֺXBh)`mK3ȃbi6O;b9 R ꕭQ-h8 =[ѳc+- R0*Ă#: kFR)5Αd"<%aGM 燶c^>Wz#3e a( 3oaF`e]B[Y`)|^+)u9BbAG⹈h>ӎD2j'8Xd+n9 n#E~[~XQ@[+zA}8,>rLP374:¶48;ib{ɻxW7tp(xp1]?N`-.G4,4O+Mo;Gg/jyw0og1o~x?h/Z^Fxi7k\WNganÿÿzmuZ 8SX~)QMK'U3W_yqv#ċH?ns!z~ţ߬|B*FTA4R9&  Is U')RtyBhKIn>!IuBO %P*!d`$PO1&dǶJts G.mTKt83;N:,B}JxyqIhdIȴqREԞSA\,,$ͥ^Y L } {KDQRYLgT6bv]swTGT_LwGj6/kpqr=_EݙYR2v޿tQ!m-V{.]^ 1^w4[Wغ~ܺjZos<(h跏f!e6|xw=>2fw˻wgΣ_;r\7V\Yjllc%?a\uK<7&ߜ웻ظnT/Mxf^?Ģ %Q(RiU%S%K!1zGt[Ts8'4i uƧ,'%2!iNExE#.{idxxײjl^v|LhN5{_DxRI%M 4?8Q 2b 5 2KAxT͸dV*]kbBaE-&nEYLfNPP=[_Tr$Y}ԃETiT`Fhn-%U8>bx!;{.Wuq&GऄW"jfNcU = )%n'b.6;d7+ڶ=6M4DU R.B(D)j!(n( 0x1M-kTo$!#2AȀ8 $QGѩJׯ,&n{X;Kl"Ga^?E$EuJBԺ˽ęajh2Sm %Di1HMJS`)}v0(9=i&4w.VJڣ]/3)Yl<.vv[U; Q;f82q!k:rSAvvhpbq,l0aӻ_6Jsܖg?>S#BoݾPIE4;lJ\NΘJ"+JHa$hNY TT\쩩ؓQq1&xHhM&p Tx 4Ayk.+[pw"GcBM!b߂6*bP B2*Z驎7tQ)q(R~ 鈎y5ʻ.růh˵sq3#Q11N@u;Wh$`Q}(iH9pV!FU7]UrmNէ^<7Wz ]h=A׽:+`* ?G…z e-ŶGZτLGZՌWղ(,E*άdћG*4FG =ʗe/V%ѡqt@eK"(7HqD0IB`ɥm$ OԔ=<*% i!)\Q[.JYŜ` j"Q#̻PMgotLNjԏw v۹sSD<aw1~gӗϲ2ϧg7 81B|ARjeuJz@ȓX1Kr;{g0 HVDR:a4ZhV Td2XCe"qb40RQ&&Q99S1;OR@ %'( V0gJt({ pϜvjH>|كqr ϶[f(R㫎_Us1Wnx$tLj HY##PI[CRr1fy Roxl%Y QkR>gT&- ss=DDx D<0sg#1Ak-NjzIAw@Hp "Б#g HT)EH84bZXCY7E?J v u/PTڳ.^*aijo^wV\ILL'gBpOaNLPhh*!B;"Pi3-m#[v! ,&u d: ?]7c}b'2 &\30xQMN4]l~\u|y0ļBsSg+Fw ͂b_8 >NpD1I-s w6\|ofJ9AiS0j+zFg>l}[z7]ѻ$rZ _tvWGf[;ֶlzWe>-mJtu+=]+Ww|m]i5{`NؒK{{.W[.w ]=~+O$-3i_>LWJG٫+(m'`V20sv_,Bn;lm.\UowmR-1:hx{q:6q?m{̙f5;=r{V[vqS6g!JD3hC5-W4"lo橌Rx킱1D3yrQϘ&WiaSeU}ԥH|~dYBe *+K)kJV&mhb ,3*AJ!$\0`*pw <C:永MœRsW* V3]NwQʧ't4fW<9}pɬ}7Q6X'wh2Ln$`.E޶\Lf׏X ajx1G/g8qxGI.>NJ*oNLջfv^7YH@OYP ( h=⽻4myRmx5{wM TmZkw^oK@ctC]>`E-Tr67->`Mr Ew;}38#O0KU_ dfH2`D,yuɌ>IlP#16EOO_Gk9W{e9pvw~ eYUu=Ӣ`h("E,]BEL q\Tq?=*zݎR,{됽)39˵@?nMK9|obʱ%C5z.ؚXWˑUl. I%(兪k{?_ E|:>@{~:@r՟ӰF|5ǧ.%gzEZ7yc`ealIs>ѕc`gvk5hrʤC5PbJUY )jJf\ѪUu1v >ٚE#YOw>brs&QV1J6ׂif軶s6Eg&ѢkC};;y?K3uZysd6CIqiJM#$B"ӍkPc2c1N#1;76{PtPQ-^rJxh`H὾LD ۘR:Wvq%cM":IF1T`&rih*|/4N3( QU8F [wpT$v$v*6-Zڋd3<%cNBk`'9ϛUE{r'gj*uϬs!/Tw$ף(]G@15wnE;If)0Ǒ֑ ~B_oFl|9* X )$ڨjC_"$\m!jT&b>iyRl U} ɨ\S>Ec=uk,ZadgEʥ: VAfYt7H!Q!S( ٥fݑ#6*yˌBi")K>#(<ݫ N7h NAkah:^vζiPTJҸ#첫+e$XtyXBs+]S`1uV6 ]K ƺz2?s%Ps l[([zb4o:dZ{jlPѦ J(}5 EɷV>SX|.H1?'؎~okҹZe R5RcM l`WP&q$8NqN +QIw4 V2TS j,8I9L,(6@@;-V(!ȮhYc@+!7CA 2nP(SP|P BHPED&T4g"!n2֜ANŜY' sG4A D7)h 3*)8:P%@ irPfdcBQ AP{S e*3RPHq Ls$eIj fYRH( ;:=V qJ7ՕHޫ0I]l-UuA1 f1r 9~2%FjeR"2%5sMȲָFҰ& =6.t4-dPg9|_.ѭnz1#.UUquHNoT U(6ʐvNZ'D̿ {06s?` rruqק 9y] nTAz|϶e&'@ZI|txḰK<nllrlfѓE:4%mPIHu53%8O)'ޮ YWIB_NGзUfĞL:5n(!/[trM1WdsC<܈QD{;D+1-TP=`C(uFAJxSbFf`kAz[2y(y+ VW XW TN1֮ ur3pMr)E bX0j@)'ƌƈzu,1QB;L zށ U @*c6*RiZwf:hNhc띵37+Vj $ƚ5AĦUJmxfҼNdU3BBvt:] vKicKl #url'ƥ߱q郡ΨHDWLp pBW@ J.e-t8Td ]8p@s+d^ ]Y`>[%ևBW@=wJ|tE$Vgg_:Zo+jz q~j,tQ~nꤝ5h!~dn;@,hDzw{G@ve'mj ~?e:z s7[!`g<>fmL]:ev"Vlʺ5=#K&ˈ|ڿNY8:F.ggߏ5I\|pSN܈~D[cgA1k;lqg;l<Nu]JpcfSI]ũj}oyعXU5d6t.->1c\'kU,> n.<Y~pIo鲗k#x\dEDލl>?]g79.N[^Zen9B{N4P>zZ\g%I'.9 Qx7;t4lXO(0`$V}R*8!3Fs-@C>EikE[ M7SttZK^1X\/"c;·)F"1*BCuFxHN*Rar|5 ">YMǕBrRP2k,FGD  >j^CjtƤ?Laz]$mH#? 5o0 6AnIT0n -{gyr:p?IϿ=6,k~|$vx=U8ld33(EL<d:C3x}[͟-5/: 'aBMSwˎMnp%&TV{k=5s!Z~nP?fv9|mF{?gݗ0 r{?+hßnf5.v? 
BZZBEmmI-]5ڛi66:U>Fj#G1n16ٶI*#[]tն 9Y:p34ٛ{MBf+r)=MV,WUpNT6{?_>_~>Qf>?|Οpfpa8iA=[%QsMs#iڔ|hװ]niH~o*j+k@taKFz[u?uMAq h#Wc| !W}>:x^O2gD}~ J\BPiabHɕ+bZ-` uA.ɫj}t?YvB9LP)o,='hFx,8s+2?87XsFȌKfK%taFCuLav3~3E6ߧ͉X;ٿOY=ya{NcA+vy:MB]6oi8aAG=WKg$)x).80=]|9`.Sj6R*1`$VVKQc`3cx{JtGִ=DL؜ol"vzɴw۟]n6g}}l]3ay*6͟I=%pY 6ۍ;$kr R,S_.wF7XtJ0\0Ͱ\^ oGImwmy7a |ɲo!܃GHo]u,JЧ:Emgf BX j`mTބp4.gyס-' k^T tڰiTofU\1q 7O ŗ\OǓ]wblC)3_vgVň25,HގVo<GQ- cU2s%jgoY_OFYjW]>{fG#Vŧ}-~EYg12d}7 Y\k #и‹cgdR212| Z/c үYd/jZ E+&@ec𕐄VZI"yHz9##^(96΃F8| NǠMBJM?O4D\283iPRY bvh<>C^D ҟYw+ǹ(\u!eG80ӫ~׽2Zul讂ν~#hhq:^X"%1DҀJhhRF t: H)_7N fi 2) S k8{,(8C #(G`trll`(/K˦+ሯ8x NWC$h! =+잕^(rTA4R9&  I 3^')RQ!H+ RR :l!IuB(J Sk )u4:`7wN=Bt*+cZgxG"xsq5|wYGŶw2ݼv獠lT{n~sW[uI-="$?ޯWo25E#N4|0hvFg'VYI$! \ep:ųj$VFz!RVJ]@20 Hy^z[%&j0 :<['%,ڥ@@ -eNj QG!E޳u}Dk+d(}+[ϫB0:Ur=V¾@6T1I, }*{k>JPC* TIj}$6 aAc=R 3$p &8\罥Behu(KYJ{ 9qclrWsзGT_vG4k6.Z{TJj}ܢA2*գӯC:[Ef"z Wk2)c(4t\jJos8Q﷏f]!d5Wmx|_xw 5>2 h{}>3C/ʬꎊ^-\4\tQttzPyc{ :Ԕ+[盿o);hkn8ZFVrYJ:oUaVH_&b39ܠƳL0zBITEU +xeaFIRH.-FG*UuHXHLXg| }rJQb-V" Mmcc>&+qUe]C;~\?;?PQ%K8HVFE `F቎J%|4I: ,Ѐp%7< ֣u $KţfDZY|_<.mIp]%k[K|yćTx2Z笵 D3 -R 9ŷ˂|W{-DڌXB F<$&J*S|hdvpy5ޕhw"GcBM!bß6*bP B2*Z驎7tQ)q!(.rk–vqi\VsK+w.ޕ6q$ٿҁ/+F0vǶfF^²8uh in\$Ad@Qbկ2_e֑YGƔ.41@q}ʁ$N45i"HrJ͙l$mDcUUAP ֫ wfu'FAPf(y8l8Aq`YZ{7nifqg /ȉ1o" ZPUΝ9V[i+Ew!_jx?}e@z@"br<*NGNG\a)J)}# )j #aR/1",D _Dꥦ0+Cʘt3_/_A鐎?pwL?fI@eݡ.> `gӈ_>ږ, >?~t]d )Q:1VFj2F-ɓqpӎfd:b1k1z4c8({Q*0YSKp3"اk&A9);Ŝ(a014gJp(;K&`ɻ 5+w](8?pGFWk3gb&ϜxZEpL0!&Ub ևR -Ea"%F)-u g-tܘqКi=Q2*8`IcΏFot$sݸakk|ޞghslT*N8bD,"F"*vZߘ/&;_hpH}2~ l^jē,nj9,GGBRsƨEE)!# !vacO]G 5O6?"y pE$_"mneyގ|y"{&8.γ : SFMzk*t}0Hq&*})n6|{  h߀<_T?.KNӳK{m9-LŧӾI_(`h1+ҙ,1_'+fk*WLX+ PX#@Fe~R2G3Un }->8v~Ho!N6_+nTz`b9җ:Np$ ‹֮ͽT׺.(ƫ+ZQ/ُP]"X(v+\ myenu].﷚w:wlŏ{}}ʋ3&Y$cm&rݍ|ɦk5=('v H Z`:yȪw4LCO7s5 xpU>v!!f14F<71/ )o]bI?ĤNaY[NS+K}lF7[ٚf#g *Pdv7JNr2O3p Z9GU*Xh5n֥Ɂ"oPCKe ,9LPS+Hy G&Js1X "#Q) Jޭ6CzI]<'azyq0Ogj."bcJ[.=E刌:G ;hWWGw<| kyN΋󧶴&o ώx35XQbZTHYB9guNyQz;$wcڔF3ˏ<͋&mX+no;fO ?)N|:/Fs1^ǜ9 ^Lp!TH#S:y1/0}ûZT,0f쵴3{ 1Z\PvqV]/AMtW7ZAW Vm+@1?tJ(ҕR!"RT:]J"eGW/U,Dk*-tz,x<]%CWzîGH#j;bsWdtZP*ЕނtGWzLj 28rjŨdA~ 2E -WQ.\͠ZeeV{٫GϗZZ^JYMv ~]/ܽ]Fڼ' ɒ(|*hoS*L0e7]T=v_M{ѫt0QU (_TJT ePǿ.RRZԡeyZ[zq(Hz=(+ʘ&OT2\Z![Ȳ8gB'o?qqD✧%Wfo;Qb%=;?_̖mU׮ZSV+ZMmePSuq"S c[c%\TKh=tS-<4ϲ3՞T#] ht9孡ɶUB+ءUB)uGW/V=UKU[**HW!&Q s$[CW .Amv{8OW %IW-/3._>KS ɿZ8vH@y`2$,CcDӘoq1КCca/~$P4Zzs6]NòlbF_>4yeg:׬Bt,O9p kz̒L@H]ȓ?)c0>^>Q@1'02=W +nqOk4P1ds5b0԰&_G. 
̛liPfQO>rS3G*WzwߝLrJ͙"<3;˜|⥉݈NN6w+r.Ky$"kY"MdXɏ"̺sUt'ע&tvy LqYrtnՠÜXLjA=|lhԦgj!ZHhO-$[yS q۴p3Jp9k ]%nJ̈́DR0ڦ]| jρWH:]%wtJEtO:]%5t?Pκz6tEf]OJo:xWwՖp~wmVɽv([ڜg;zHctc FZCW .mVCRᎮ^ ]B-+L n ]%ZNJA:ztEMfO6EW .nuR~tP2 +Uyk*-t*vtPj +N^Zh?#]U\+~1WG)}՚#]ԛPh [C WDD+~.THӜ%W\HWa魗-f?陀a##h9c" PHEH"C!OVP,64&dZvpB;}$+)Wf5t,$FJ;ztbMt7n UB{UBɻIW%geXi8_,Œ'I,f Wg'UqPOrW|QkfLĘ(hAV9wZ[[oMV3pzK}6.0##<-ƲŒ'Au/quV+ڿYљ9mRmן*||gӌsChˆ,MW w_S\EFvw6+2y)ؗW.~;b&>iis˥xdqq&J^4O8>nr5ǔ_~O0/y치O:߻l߂o::, *30`֝˛Z eXƃ8(b3G$6t 䩌AqbG{e8A΁^4_*݂>frȯA)҅1Q!L|H X^;f2Rʃp@Gg`jg8I(RT{^#$l:q<)D}iRq Xu44r>;Y||r4fq~{y swLͳ,`{\aH‰M\iK*' Yn_[-+%KtERzQ&RY(XOLk\4XJa"$ "<ޣs0Ǘd.nx]*!U1Tx`1dNF03,*0ƜVc 1T' L{5 m@QkT?$eeVM_ !h`)I!: c;"H<3Эb:ߋX|?д"9w O&Fm#GEXwZf;d M,2Dtj9֋eGoێt8"*bF@Y IZDRB$F ˠ\.94yU=ar1g?}U}{^j]YMm}ÙeA89H4YH.twFIie_N#oi>ڜk`{u¼yQ߃5sB8hm֎̟qIh ="4rЦEhyqo[8h46ª?~<7d qἹNb<~k_^L yRRmHeYyd'BpsgΝ_wd_;Vfz6oO>*j@lsB>D=[\ sC$1paѬׄm +g~NيSs\¯LJ8U-<↧Gr5 0]ҷ)3,876-EG4qLJ&֖w=]F͂ǁlˊG3,#6pY཰ 2hg.1EHFrzsEQl5:^;$w RB\kjA74ͨ\Y cjF׻ig۾wz7; n'HQ`vl+z,y"NuFmf݄ s+&i`E[j; |H??.?e B8[M5(6@ hZJ;H |HmAK/Ha!M{ `JD+LF3ײ[5qt7Od:xsB<=jVta!ħn:[Qve6Н.#Vjv1/XGVZ=j #\'NCN3AG N4UL$0plU6N'Yzl#9rɦOJ#-kpk8 xLo0ꜢAN.BPR(N4%ks&*v(5*Bq@ BUHj/vDvj[.@Ε>rBcZϽ0ǻ#.>n 熱8>Ʈͳ-0.~&*u-lzm1ҡZXM/-YZAW`LTA獴; 4@؉Y{rp ea'd΄|t!!c g?^x87']͞nύI*k"QRIl0 lTPS4C8(d袽BNSNGSH)8H$sI@RКuCs$[$vQ&!aXPNO9TJI0KnNmXH,9giS-CzN)֫SM,vA&, "BBf< o`1,TIlɩ Ǡ#lӱұi213dЇzR&!:%ީMeg_kw:_Blzb߲G9TP}'dC:,{*^r=Ĵp;1VSseC>nܗi9q e:M5q͝i{SF?l{0Ur\ޮXr2Qs&N.s<̥4>7H4%!1mȑ;B^k`qeR6$JÂ㙱eoo1٧Jkȍ>ˣ4S1Ymh:f!~L6V5IfgNФŤZAyɗ $ %-YHtc-+jkWnS,M~SPyr먤 Om؞iتU I>GL4?et$T:匵@P&]FوvOkk2bh4m-fLO%GO&-<>j2&'[*fƱuE;9d6?<]qqp8?3[_ I9eV9fd YdpVn1.0O*IT|(ĊLէB|1.x1AK4ܡuKӄ2%~#\b[GY7b]g;k[LGc4@NAk1iM@-F-,@t)'܀sì1X2!CĢHl &E#(z̊tHu2+[g=lH2p1b['a_Edѣ7& )p>r6iX\N t$5,V U.\{'6qRRڀ@#y.v&`(S' FόĤ4>kBo/}Ձb\:%_gkX[.Dq *b92!1l.u9H $1SQwv)Ķfǩ#C> k1ElQa6G3f{t"ozAKiY P؊@%TR#Ve/2F. k=mn]H%CM!X(WQE*V%ϩ9 rB(߮+O-]vXt][o+=/ś`l=3 ^mdɑd'-ZdK,l)nXME._U%$0\T M)ϓVy/Dd"eKǽ%Ɛ @J CAkf\M6F}d_S3`bf5p:eBDfTuI0W]˞6ES&`8IAZp`DrNτ AIHB5r*]𲺑|>: @c-C>9k6xhV!I3T~ϴsL>1JڗBSŔETu(F.j["8ede#=v|h9Y4D:E.1 _JTq\~5#R&sJF"F dgEF?TEF}78CȜ3fοWloFpuضH뉐0j$'y1O&z:-ևÐw ^fbkmSq@‹/&Q_'oE¥GƳa+9 P&ih.ajV[jpaT9 Q.*$HY͔G}YYe#șΗ[|,~t8TgeZ7]dN >"ϖ姩gEjmw=Gշ_Vt~\?y_ENni1 /Dܼ㌊j7߁^=*pFppzyX?_[/5;\nZʤx9M HΌ.7t邳%bL8Bh[;0\w.T~#Kv&>B__ *(ӂ@P`uQ I $(Gy\ AJCLQBe80(3Y`jDzM4¤1r6 :K4߯*OiJ20^׭Z7[uA3}Gy_PŎϙxb63fk Q)rA6 AXBQ)9h'CFj5U O% *0^hx0Шe4&3FS!]P*H\@D mhf H(IaʝQ1xΒ5#g t([ \w: -!748ݭ(*u:jrz5_OGJntBǤJS%D:D2)Q["O/IrT'jO49Y"pz gx;O%J/I(E0LS>E'>+%YԒm\3J/sO(/*ySmUr'UɛŅ{^TuMD6sT Yiy9k&rLxDP3[JJE#B{:fZDRh L g*YN&EWy_AoCwٗt6M L9_T=6/G jPlAspKE* 8lq"*7Q9Q)6"5z'٥ۻ1l+&wx9wPB%oȿQIM:Ն:9ʪl<}*C0oЋ>kxm& )}ΧuOy/HHX"@)H Y! - .ޠBPƹ:t.'7 L$xZĜS)OMcȹ#.4>zQkBFeNfKӳ9s\z5ob>\!Ȩy{g~Uqp3&CGIx Bҙ KEd$i.>;{o.RuɇEQ*n;?<]I%>.a}+a}Kr->$ty`s*E `<$2bFXp\ ooڎkg=ݻKw-3mمCeuaƅ T. 8r)\\Jv׵h^.%Se[.Ka搪;e\RS*kRKwudUWP]qA=$u $9uUPUvX DU&XӃQW\s0 D컺TRުgfNlf:s Yc /O*i\&s}$5oo?o$ټ/^5QR{1gs4#deōV q=םj#WK!$HPAz$!3Psw*jg?h2G ѠA~TNTRH)Oyz=1yKHI63V TtuTէAts!ǻ_0A +NBikͬd3wnN6yAv; $Rr] ؐ|/̦zrXٍB ۹IV7"[X.A KH!_?"t΂9̩& >̙\Ih!SMҰh9-Hmtp|_ ?4?xJRJJ/Bm1qd~_܊}d%-`4gt;Zqc{TIZYgZ 2%RBdru >Q٬D:υfB! 1RptTbwxpV[8Q  0~MgW3~/ѤTʊ(iJil}~twӜ+e1'Y'øDQn5N8cZdR$SGbA.9CS:~P +M"l ,oU il)xv oTDD:J.(w߼ +C!~,xᩅu2Hn"<x&&O  hgp[QFG5iI?`Xi#A ڋaV@  ((JAC\ZAjVɌ-I%dPh"M *! 
(ID gfBl{%,cffg GLJ(]t{ا⢟|峀8.ߪSTuS})Y%70:G|0*}0}RTmo4YF'𩘼uMQQ,jtm%(hѩ9^(hj(LY?%r9j=!1 q9JK6#7\tIq(ի*?Vb_TWīV^%\ٗ*]t|w?*91LhAdgMޗMLf߉&6_/V|F'PPխI| >L?x=Ζ3Orss$h_/Յןm  uH tjzqaY#aAp'm{WӅ]=MNЕ2~$Wj@8'QGiJHUzv1B핊|ACh1\Hrr]yIgq矑??y7N'|g_ɨ.WDg6n]Y]C_)&tgꗛTq#yϟJhDc+$fKOmAR"D@G'Tht!T@"* '"( J~0Uye%=ÞWRgږɑ(y;a7/~CfݚVSZR}[݉"ۈ%J]31=lXUL0mrA2GGҡ`q2KFu bRдe]T=UexSq6nq<&Fa\0ӵڣ%=:ou0>dýO)ܒn5H9[I9_tb'”8(`BQG |oj(Ao>[tǀ|>N] J2w''w-ysW|7=8:_!4B9h>ΒC݂bCB0zpP:AT&<>ìĥNN}ڔ~/tSUxM YB:h;\u1L`jٙdvTf'kd4jm 0Y2k7"d*ʦ,bjgۑp9=_Ȏܧ/+#|q2=C|as{0{>1U)- bĕ JAYI6LT{tS]izpJ:u.U$Or(I^iZZ]1Slrgfn~g?_\\M^M*Y)sK^1e/YJgc(e5KVÇoq!Y^P+= Hլ0YJs )cD;&lBcL.9Tx&.Vz#CQS @& ⴠ;$$YCc=,g -!I[8v3A˸|4{r^Su?Xnޞ8}lB{Ǜ%_9,>N0ju?c5 UUŗpW:jWpy%m.qlT4??v*{c7/1ɓ^yt<i#ptl8pQY ܘ.l6%K- {ȀXʖS".R{Vi-MB.?!E[m;X=ע0-8`c4^,qd2jkђpy+GS:PnTk9vub]],UF$-#ZcS>wɚd:;q7sv;5ZY#rfgvڼgO7F;ٗO"}q.Uϝn2[rWT: :ln)44zp !Dc0Hf&nqGT'ӇQjHdef}..[bITR S2,9]L&J&yo|.lt^Fktzyq~FߏCl M )KO JkUQ#b^i7g2yx8ЩT\DGꬌ5 QHꀬx12>Gchk>uA0MF@V` d *>3&߁`9ۯ>fUHmjZU9fj}7ْ|e=ؼ,m_KŊںGyx2h|@uI{vyC߂;=ώR \V8`Fyx [ $WŠi\IOٙ%qF:h( >dK%U]RT'4J T`ҺNXﵑ:MYPXA@`pΓQQ'$`wNݏBǴmO Ƈˎ<cNmix'ɝgy!6`g'NhmpmX--޸\>A!뉳! lٹbbE|V] .)M <=7nc{ r+u7IPۦfro?ߵ2 fÝ\zlY1OP怒h4Uř| ^.?>.ly^fR왞'Yk9k44+mIOʕhYPV,kWQ$`~6Sw̹6/FT$_4"ʠ{CWMmcC~,Y9v3K?ƆU_O~[~5z<#!w}Z=YFdl N,+?/iOF>i{fg-{ǧN7|<6r~嬩@1)>]o` sH>I48 jP鄬Ru8ĤyL` _/FsВNrusX>;aY[$AZG6Kɥ_ (@A0:m)#&\Ѧ*X9ٜ<&t^f#K=-hήG. %fy,a{!b/뷶aڝ8|;+Ks0ԫQ񧾸ω&?ձGմE^ngwbG oe9[8uJgi'VxtƻOJVq[-F|sn1}^*n;NrٷzY[wzkoUj6^ps.΂L2]FV6M%T"og^| s=:쨴b %gdl۽*Ij4M)Ϙïs;"RwO:G IbJJI#V)KV;vQa1S;e;~ٷ׃"MX'ψR)G!P!M®ZɨP񀚨 V>{>qš8I4[J%Ǩ:;j7sv;jy?k?ycOJzJϴai|U]lcɕ+ _*@3j?z~Di'74EAIJrEnU)cVY\Tr1rҒT5-lZNapMX%SqYUE3ae3bK;wwD4E nbсC2+*jD֔J-[A+&C̄൓ YvE-Qb ES P(1JՋJR]b_͜pLL-0KD?""4Eğ%"*ʜcRQL22PtqE%u qJMP,W;V*D'qTCԖ3dl f(AG+y&I&m\9jGmloA^쎈_?y(qq\:nd߸h:E=).nE:XcEqb_mԶL8ƈlK %:.dIJ) S\r(X F^q+o 1Gd+<:ݪxKTڧc 43҅+GHY0lj] +.fΏ:ȱ*Y8BMb*"M c2;/&Rd%c='g9$$ ^ ^g7:Ъ!;tɷQA^IL :HA*#<8H >;//1hgɧ5}ވM0 n&^fb.*t<1x@G_b1: 0ȃ8fڞk\%=̳Iz*H(h_kZe5[.l"c[4lՒ) It%:#% r$HGRd콵9A͟;R53_mF~i3Z Ϫ_?1}ㄎ^%A`b~г1LE5qlYʭ+, *O#il}c;qI^#N_l}J(RaGYgxJ=bKawMOUSOt>18iDP~Ul6?s`GcC6-8$0I#,Bքpɀ7_zi`9yJD/ 1%v#541! %5JNf[8R>3F-SBF"o va h@Sޞlp"c~ػM ?쟟雮/Xw(o,{7Y*f /uڡ;/{v77?hE}Z"R~ uʙ<AIm((q&Kj#rWJk"[=S׻*qk!Y7Zח--z-zƹ f1T=ܓf+Z!Iп7 W;e$]((N,|J)[խ*tVuAS|Gz8٪w=:ݪucb9g^bF>bU hWp/5ĵJxZ`K 15=N%V5]|ƗUi5oRZ惘%,x_ kRzAW; :/q*% ]Q="r, bSNj"jMQ2pȗS6T [_9n7^VbH3 Xk,u>Z}o8]dX^SNڔf7Tn\g/GŷϋԬlq:`W*|g8w=WSڨoe^.Vd#N37ODM~VoW0V/UNJ ͇H>հh{cΜ9hf)U$Ȕ.O5._!v('ק.#ࣉ("wnF<ӌyp7Sap3z±2.ڋ\"%< y<^'\aޔgHAӕ8?L;3 D\L柕aK f&y>mgel%io%)e5n)0. 
*b:r=A-b!IZ:>hJPMa}9sKƏxχ a/h}94oofh?>ߪI6s~/x9gRsƓc{u?zA(=>5 4ζ's_a_6C=ktUDؗGl/W34z^7RмySA_'&E*XJsR.gUn ?&*LCq-gC\ePlc/~8.ovW=Ȍ 0k^}w/(hr#ʙ67*\:kG@W^2N%I.cI%QMu3N)K"|0:k%Hl01bS"+-qG鑰rEB@9[iǬQFYJy>aaRkY-ěQjKWv3 $mÔR$9.K_dY>\a֊2PnΤmIQ&Rob5Y.,%Gbe"*H8R`*gt>]g4<0mM}3*#\$ADǤ"9XaQ.u:u,"PAP,@29({= [ŚqSƺ]&!h`)I!: ;Esy^+`B?ǚ2$GN| ޱcxT,p7MQü6T4T!BW$$r3[3D3l#d 0AF )0V PfI5*vzv>t3YkH<%B$;"rc6ꦝpYsN>V[iy'ݔap ۳0>N/6[&,/LœZ/"18]"d|Np "^JA) vNCp7_%P0 t f.C(\:$UX.TF]!i"%{y(t28z>ԭpݚ•T#{>jEggmk"MHҹnqL O\qSH_LT(.?fsOTݝuXr4>x5 Ζ#ݍU9TOGvfhX2vB!Ҟ_ͪnnUۥ̪{0CG4@}ou^ Z!WYMz06^X {,)ze=/̵o@O/޼zMˋۋo_`.^}ۗ0F`L +E{^H57*MתΧ|~W{|8j?v2D_/u{%ݘHy*j}vw1ݐ) j&o_KJȆ}\vanzĭ`\0|ؿRz;kHE>ג)E 0Û.@0x5b ̗rX-߮L޸:p6d^r>|Kgp>h4 \9⬾՗u$ٗ!l^4Xs~+\'b& 5tf;v*m @*c~6`*Ik0eOS iG3a+Ls!>@4iTC&ؐ )[)Ƅ((3L4J'\t6Bp'XK2p1HRX:Y# m,ys3^lU~ȽǫpAm3)B@D0Hjʽ,k,,Ղ m@h4BkM( /Q1rBPF25[pml8 ~7U'מ}5\HuE&h"', Y8/jw%+3_zPC (B9*; ,ZPB c!A>tlw"IC4;!:@/('NCP*ZU@X/.X)88 C/a|YB7ztעaFhv4 @kg j/ X!$߅YHc>p=w慔wW?65n`l ܐ=[c̔~ bDx*_{So;ɴB0 s @lΙ2"?jl~hh0ND/5y+AE1l>=*;k_#2Eabu1"bҵ#90B}r5Mhk:g(&_ua;JL2VOϨ{9kr;ݩV蘎كa-mnΒytxY\|q7:fGI^ oE8uMpt}qV>̰/{eWڽrkv\Z3v݃ 6ђv>K4˾;nJdLAcCwFY͖lFeiJgdS芲*!;>H˵r-Kod)ZGPf.EZS4 Bx]h|$Z{ߓ`A'G:RsDȥ8d2Pȁ)E lJ%:'E\I1do1DŽew2 _V5A2XJ<H`I“sŕDPlYD.eLM5%XJl hleD9I$^i**lٮl~[gwNPPz+iPx҆fO@VyqT>ǂd`CE(56`ʠ&\d/D%%Q-S IQdU+DId6ca#26cg fJ3[L2a/zszk wL@Ͽqv/tb?Ů% +)V':ʳƳVP:HYY f s bꘒҪM͎3*/ EMpYw#ǟakvZ 1j7;Mvnx wS\NEBHes!: Rb%+[I" =, 4!g( GѨg[CuQeqd_DŲrP}lݿ;a/a<L>"io" /\ ywZD@o|PY ^F0ֆbN=H!7gk yUIj鲓^sI+u`=|ֺZ+v[>d~ԉ|YtLfR]Th7zi hAl0|ʐ@t[Sq$..J:f{ְEUeǖՏjG{|ěШk@J4tT8K Z0v iaAuk.q;n$( ޡʢEG,ǃ`sT<"%)mwB#`UJ' ɚv`"<2zDU iC޵U$#dc1 !3b!KP,F `e .mCX;7,rb*8%r֤ڬ.L ə$QY,Gޒ- ^F9vob1$- h r *kX]A QXGݿ&_8ɗ;YϘm& {|jնx躡+q3 ~7gܦĜ:\bNq dcgUi(#+)]{1KbN5^K5O~ *z[z'kr"+$@޲=#P,m)/ "-IS&%RJ9ymSl 7ZK&L""d!#){&f O&٥ R}|hሰ*yeg=Fp|I{o>q;ẜy<_;HoǎY~ZPDNDS"X1~^^mAՋox_/9`ڈ 5b ۺQ`W6[׷i=e\iӚ~]?{md/rpZ,J%Jc LB͈*}8f `KyU{X ~ g JM@ V$0 رt(gQ j.ڂ)&4p\4B9M>h\*8t@¡3ٱb"v_Mp<ԏ0`Jx9nsSxN )ĺC_AK>AR1[,R—+*Q|)I&YaFYjcKPey0 CM0<80-x)`0Ffd.*BH@Zں қ I5}m@yRS+gbK'iM&ԪD#Ɔ;;r[N~'vh2=tiGlo7c`MMɗn X39p`RUD{`:+ Pm) 1<MZ3Zfyв֔%i7Ep1,#)S(Q'f$hyXGrvsg#1A+-Njz+p]CIb=o8IJE`QvY+raOʨʹ(ρz۵X- mcANQO}"_m8|wayjVjLߛ<9?A 3hÁY2bwf yʨmՇLݤZrP5q[MZʄ2a^!X&HSE57V"FMsFV;ɬzo$g-נKO-m)ڞ6@۹0qaDB3%geg=6|S/It732WI`\%q٘$fm7W %E3Wo\Qɵgd٘$.eb\%)WV\sŤB\}.*IKZYz3VWn{4Q Gn+U΋*{+tXLį8B4?ٯ.feSOs/1da~ q8cV4L0?3%\t>fd4iQ2䞉Jsr6*+𹘫$l}PRuyCo\I5:]1WI\I\%iu͕T!љ7hz%y0.s ]66iRKX|߼{WG31'D;sʭ&W +EWbKp=W՜N2 ,__*zW!+a?x؟T3SA?]zSlz/"<:73i4q Z[5Ř4ͪxiqEw[,lы:> .Bl?Ư3KTOg L ,H vXT]lM-GUeԤWw!/u)jyK=U9LxUJxc&y`H֧/v$qZp,لM3|N$/DaQPcEQJNx7C_Cc-cnKG:h3avg$\kL9&GLl4m~seK{.zQi`kT(6( 澤tR]Ӗ!A#SI`ɖp 8jjS&<߃_x(>z ڥ+>,[6< w7"u|7oÚxL;pv{ `pek_+A=V5x։q=X};pF 6y6{| I4DrGy0PS8%9e#qRPCPPؐS4MLVlbQjMR]!tXr1?}q<[}a|^.[ ϯWZ+-!&p!EqDtce6 -c@f`C:pty~Ag:z!f^K%iǕkAy-'GViB7cgJp(; wz{Kۜ~w7OOՔ{mﭰbw7iKWj75Rs?/y:*;0 T Q3HX@"J͂{ENG[G#A4cxFٍLO,ϷfyYtNk1PtK#8?ϦG)3wsd*s*W4_q;4NstIKe U# 6TzZ\Otm~1!i&jZt\끹1•,x4fYqMD6͡-Vؿ^xFӔn G]cq GjSuU4 [oZݷ|5iR(eJ(SZ`8i^Jo?>2ClsRbr[wai "J5+mDr#2갊1t J'ub8REo܁M$_[ k_[,K߽6ҚW#0m35XXQbZTHYBT:LM7yzkŇ15j ޼ic4]#ڂo;DdL? 
lS|lɗE{s4x1RE"L2=b l3i (KX&8!>=ND/Hk$#g9+bcV4G9DŽRn&lb֑aYqGkZ )$iؘ8OcDnZ|A;mG84SuF^B^|6L%| >:y"Ӻpy3_ yFӳ&f􄃇 ќqr%R`̣0ǀ vS)I;Co๜hS=/Q>My+ i7Є{L,c05%a4UTsc%bD0=T+e {J/X1Ez`~.Xl\\4W0@/?f|be)˜A͚˜eژݏ擥 IVyύWO|o-鴤Ioϼ[(hre(gZܨr%Ysvo~$q-_Rkt&w~E0  f"F,wy p-{署DKl6 Q C4jve KȳgҚh3+/K]KJ-3ɋˋTۇXu.="W浬ârsT IL*XOLk\4XJ1P2Ll:Jhsb|E S@ Buƒ  4:&E}daQiƝVc 3"PAPfdVX((u嵓8SOݠ!h`)I!: ;oDLʟŕ"a jx"1+K`6o4cX;Ȉ`a!k)(]4tf2 t^][ neqXאK ŕd,@S2( gK"O科g&EHw)LIɾ[UWآn[(H8ץۧ }ۍ;M97ɧ(U ;ZgeaW~(.|vS}~mF"w07哥ɮH@цϊ7f|0| ,[Gb|HwuÐafUU>4a8*lbSS:kYX^\|obJ{8_!eW+ȫ`l yJ(MR,]M&VU<"<^y|kW{QOXq`i<لd'ūx(j|\CW뒥^VxjV;~*jB;둵v 7p+z]Ȋtp=`!X~ڪ۴yP8PHV]ة`i],q/Ϋ&9OQzO*4:\h xI pX?8q4rFN:n@"sZ;"bqs;!7T Hi}Sd]ShđhUGGJ#R^ÍB=[`UpNC~ϝغC p60oXC{rj[M B5ތR-=&(AsUbUIPy0?BuI:LL9ѲS(%Y˵p8 4q0;c$:'-* iwCPb\m-Vx0 u@wq0`#kq56Vr𻛣_wm6px`QX8*,gՀb: mF; \ϭZO|y|-@LHē@Z J*CH42z0G?(1< :oFx')+R ul#SD%Ƚ?Y] ok*ToT#Dny0h2sI8"FY0Lmd"x*% W$u-O,Q]N/k Y2`G]yS㓮-=w}$:*ly!S/$P'MEK0Tu./$2[UN)fw7ګnGF99!w IH# K.'&^!FExFUzDqy "SƁ>Pl7jMX11hac`RF+tz3 j(g4R-S*$e{YTJ"s%%{Cw4') AzO鎧S0 1?D%8f$NaI:HS9zR`?Asd]-uS#+e ahY1ʓ:G@4*OM:# MO !\aigQ\YIUZpɃ0՛N)@DU*pJL0({4$-wv,ZKQFGυV()5k: D锨ăCx 2;{}` x?w}Zʰ`zuzW6l1q)E?_UrJ(i̓,. &V$V <5.W4;fD ?(JGmz#x J[k*Fi&ruPHY"!("Jj[#Tkicx>\kR1VQ}ᅈMfVqr\Pkj_CO R&\IXt1h:j #F:*Q$wAo!D1.Y MWh_QsQiain;u!t~rdB&*)P;ӆ?yQFQW&^C1i#= Tpxe&V 18*#y7:$N 8,ZJ`U Jn)iґ9-DabX86>^T,3eŖг37Tv4| ^#v/ 蜐O/ **pFBaPbх=ʭ6g4OɞYc8Ipb&܎hF"Jo2ϰ2pvqhP̮Db㨨}7Nm#}=^MWJ"AF' kJ)28eS6&!N ȐR@yTkBD_di/8b I"ʖ_Y̜&+ND"DDmJhLqXEң$ $esrM G$AK9θ2QRőgzd χJ'T+emĦvgK],Kp\}\^$@%G?v6,Ӝx>iy0:@r -$|>.#.v']Gٞ!}OjMpGh4s`)hds5\fZt]3Ԅ9P3U6Ћ Wh."K WZ`]WhfW0\! pE' 0 hw!'ӫ5in낁e <*gY *oI)ɮ rKvlXWiWn Y꺘Uq?_Ւ Yp%)s9 RjK]m߿n}W4zs06!sN޼l#V5H+8[rjM>cP3=> JP\dp*T^kfE4 vc+6w2ہ$xӛQj4^wm0oC?{'\~_;@#.f{Z:ށ7=agnAar%փV|{s^&.{`w_<^,ƛaInq4+uϭ>;]tٺQͳjHY/SXWL2lp2z5桖U"D]%R3W+AAՇ:K2 ouyL._b{}Xsa@-^ɵ&-zDo'ۺ~ۊE|m|nڊuM}3oLՏFyzߑBΒ!ì w{&AMxaMk_o ?؛cFH"@!Kc 2,3"ץ< Ώ~%ᷳtkWs/P>Wmfť+6%frJx^]߻P~Yx~o4Og֕mUzߥ:[SS3q#ʌ(e[TrTjk%tc80ݴ8ڼ8l̝A`.h V] 'N~W8p_|+T䁖ioѴ2 wI✻ !UcS @U;ޱ}DP\} \~6|ƦKn ϐrrS{Qy(zg`4G5i+?`4 cD^gb4u2t;W4K.DQ.ƠyRh 4&[}V>볤1)8TBTʋ8:~s=x8SYp8[ D\ ~M׃d5`mqnC+D]sr$m\~r duAI3;ni Ĉ)X`a>&p,F?Oanbȯ3U&ߛKo\1 ;ѢarB 5&[9]Vn{ͩS`PHxBʙ +Ed$9LHJwm$+ ;H˩:U̇ HHeu]+-$OY{(Y/G23$I]u9)vW0kKL-@uZ}jATcZ`T+f;-"(+Qy*hO`pap%rGJԆ8w\AeTv*` +&5f\/$*Zpvz:UyCj`R,f\ij@4~f\тv @08D}q )\!Ύ+A•"}WPUXpuFnWMW"wWPɊ\!Ț9l2I?uWaAQ{[vʙL35ȞV9[&%;;o q`iQaJێN rT#}#*=zgԲ;DUq WLHEqn9;Qg?g'*yqWLjZ0Ɓpڍ3g'ry9;5VW-:F\Er1@QKWk܎]N19f]M{ijÁp5M%ݳ+]ʇpy\\3 DsǕq8+#+gap%r-+Qk[JTZpu^D1Mp4dWVJT]#[FrW"8ap^Qp%jݕp W΄h,w'Zg6pQz1v6y,NFȍ}'̲(L Yt([ Ms2FܺcR/|^oܓt8*M8ޓs,@3\#r4l> *Yfpfpr j *YWG+fW"8aplՓGJ?w\JZ1*qk!qp%rWPkqW7#QW"sU /2~pwz3pW9fZX\MSKkʹ~'/zh@Iz,D0ap%rWP;Dv WcߏLJap%rWvJTzqJPW4~\`+QwuhqJqA*+Qkq%*-:B\9#ހN+gʜxw:ށ:ἣ\3!8RSZ7"XTr\0}f 6<$8|Iir0߈@sǕql~9DY/\8L wW҄WG||E+0**VWrn7-zxǮS$irJNST NS3+yxC^kfpy4Ft\}ħ5j,:B\m]Ao>&|MS{sd8q6 1@W._Kx~i^ pyx'e?>E5岐V?m8:_r Z.Km|?}ky?I>w7ǒļ*>Z_ch@ۋooVj7wϭ>V[Y}v"Ϟ?+ ?_>‡ W!B wNkX9C(z8ZG˞BE:t~tͷ} B~q}u^bKw-@vLGWwlNMcoκ;JNW?݂ccG;1cH@gϾf(mVOg0m?M00NM:ũZ?{&*ytjdH`0:+Q{;qW•f; Dnp jux\J̃%ڼdN725DەRgϴ? ީdm72M8ogJ3kyEgNB2, +Д0*a6xbv[tNf,CqMNrILn$qgw-ȊM-@{i5 rejAji*綾2$S l5k= 80 DfQKzs[iՓ*Xx7D0+je1\# Vpǹ,FzijW2W Ž]Nƾ;ݬ]MK^8rZ^ 3=!LUXpЮDppW"(+QÂ#ĕ!f$ap%rWPjJT]#SvS~IrWVǹJTZqEΰpa\3L1(ju!4nqWG>z6om.߾Ў(cV/. [~r&ݔw^<ˋ/B~;[7.+pj?Z)r=P䑿Qv FYwQ~˒nRhNy{RKU8BF_OkHs_o?+IݖO7;|r&Z~wR|6K>޻n|)^ݞ*P|Sߎ Y*Qvna96*@Af28â/|@|?n3S~3 }@'~!_D7oo;5}$WPIm??,oN~Su?\L[2z(ZBo[ds$oUn:[w݇} o]_;IϑE|DJbWNZ}nDow?NnW:9]/R']UYS9XOgr'75s49eҡYUcf&ԩXRU+s5Z-:Sy矡ttZ}[_j%n]Usu&wɤZpΉ fZA+1~j"Ttll!K5mp5r׶WzNѦlVh`c45e)ϷaRΕܝZ1U]dlr))5z-pOKߒP`F7vcF4cvW|D\L5ёBE{))ϕ< ߈h`Hfz.S֕6fU3e\Y2d*Cdt](]N%R. 
Ql%vNĞ봢F%խb `-) ;*D{j%xڐ]jy;0Q/3k6RS,qFnQ0xitv.*68:-f0N<ux;ov,5 (%ʭ:<ĪWiJ|Xdp.M.`kBhclue/yX ĵe+ qi 9:vLWgj˶ " Ȫ[*HJYlXkY:C.D@ %8ٻV(q8+ ROmbb OnAkpjmWHb4)*PF&  ʄW#I}4*(5 76iV2\W j,':IH&[j` ݯ-V8!خh؁YWBnU{CZ`Q [ @ COFYp.3@;iߩ{"W*:#χPɨ[s6 n1 04Ǝ \ DjĥȽ(2ܙ@sBux+ ֑lÿ/J0 7PP9S )0GRF*U Bi-K(Ez@ߑPi6 qJ)ՕH*lEV3{-LI,32-({6`!Ѿ)yAQE e>&dYk|D d@RB6фƌ V*3&m723gJ/޷bF\̘uHNO*DUk5 !0!bE@߯ s#vvvߘx\eOdZ$9eZtkgg{Y] AmzF: x@\8E4dYl̰t%CZRUF"1,> #'vQ|^ŢΈ A9JHDNk F0vr@{@̉|$X֠ix$ۑ< #7<5xNv#P`ZDb908m&k)B(NEok"l<uyq0lTrrѡ",'S 6"1wpSn9h˰{sP6DR\G݅ZCGP#L϶AN(X1zYe9%TmIJw 䁀:Cu ++\36$-DWC޲Agry@0΀@ "25{9v"^!3SiULf_be@¨U8vGyQwk诈Ea^mXYwnjpM#2BPˉ8Z\O`}Eҽ,IQ55h@eVoJj|VUrDo k on%F ky4Y@USŐ:Im2znz~  N'i/_{zL5`;5hfѓEFظ*}mwwR%;Y:54k*FkM!jKBq4RVO ֮ ̘v{rv7{E[fĞL:j7 JȀ-a]M1WdsC<܈ŹCvY0mA )ɔg]nIطuIyh.{?,n1[YiRMR5F/lQ%Z4.Y'+3OFDtB0=`5x}&i` ' Xa^0rr~r`ӑS0)A^g _ة %1<*Bg'ZCS`-2~ ѴaQ=+NF1VF#72R/|z`Yn#XqQI<(ƤO<t4.еoHacPͭp^`ԀtBe_wV:@>xq<=ž0χS,oV8XKZ]٨ed8nR7D&jꫲ\ut9@=7gu\# >TG*u)pVSߴ(,^ (!%1*\@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H tJ vCUJ L="\aQZ-̎Q ;$%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ Rh@Ƭu)4׳'­G hQjRHh')H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@ǫ@&%%գ^ (!%1(lz055]mq?WQR?pR^uZ@~ޢ玚5]7PZ'7r0.aF*{ty#nX[w?\s")HE (R@P")HE (R@P")HE (R@P")HE (R@P")HE (R@P")HE (R@z0FZoşo&jvr*PTJ/iz1YP"BO?]up}-th;]u(ɺ:Fr +x-th;]!J爮VJj j+Dm Q~lz´Z<;]XI[j?Bt=[AWM WUUB QnyDWCWHuEt2پd]#]Ic S~Ptp 5}+D4ҕ2 >8׺Z jd]#]iVn].&̟0gΉQo\KF.qWQu>>,q:*&}.7W$Sc]7-,fM`Nxv i}w+7Wve26*xW-mNB)uJP>e36,4Fii,sFp[KMy)bʺ- ym WmMĉRo%- WZ@UZ@Ђ}BdC 0A# - URVDWj誃k+@˹;]!Jp"Zj j _pDWGHW oksN2SOZ uJ 5YWXzbn= N8)]!Je~~M/ s^g >u''X.f%]IvZ^Kt^p_]`!Y5tuբt(LUDW؛j z.j+D+| Q*Atut%RWDWۆB *&NW]#]i[s/'\j+@+;]!J+V"Cpp9|( )-N6L=|b@ϼhMiQZp4m㮦Ͻɞpm5th;]J%ҕٚbvzbvװZ Ѻӕ1Etut~R{(fw;ꇡ+~Ϧ'ۧ])v m]>'>n4_\J]q6=Ƿi++,.׵Ut($:B\s[ut(&:BfBϼݞh;]!Jˉ `l5tpM5th;]!JoƲ^=afP!'K.+=vz?4}=.1WBӈVDH(cPU 1C[!}lh^nm˲jڏLVȤ;GղmmH-[͞|52bI3X؎*8' fY髊㣛9|1.]'<<,o_۷u,<7(wrY:7܄]c(,.YK,< zKgXQO.S7 /);Ѣlx,E}M \sg`O'[هqY}Ff5xϠ\<;Eϟ߾|4x34t$g1WhI/ChiJY*hx%M]N/݌r,Fͳ`daq,e]p4tXϱC {?z)7 N/?q6ǂvi0sګ\f~}h͸̃f UY .x5j:Y m_Y.\# >TG漼ցsF#NoHHo݃ByУsmoq1-Q,}28Ab S鼤YYLܓѭX`ݯ݃R )[lG \Uf?~hBi}0HXGs[<2GYFv,rhn6 ͈EbMX0֎uwRuzkm+ s#B¥-Fe~Vrq`Vc8dz2,:1?ŒK>9 n /cAˀȮ_͛dG<Nb5^2#asa7pb}ͨx<;EkӿZv50lz&KKjSպI؊z"A֢20e%7$=F ٪ͯݔLCdoF蚘cht6w ~<]}to],nwĭ Q'v)A,Rd\`Jh ȹdYɖ?JDQ%y*ZJ oDP' w\-`r08LۚŐH?]uRo>!chQ]_0xX_wwZ)8hxM^4e/΅P^=IPtj1ɜL4.X3޶m䑷bR 80U7&b bxb(%Uc[|Z}J7 xiM:$yǵ5U4)geK2M9πIh{mZhp08ȭv{No/ 2I/`ɧow{iww}7"8۬pG&;3l06e1һy[SZJ*00XpKtN,Z1ɜag:[eC1O0jZmS^g[Ra޸%PN]R)_By\i) ϝM{BA[qQ<0pץ X*X.E;r&8ʦ?{WƑ /{ؑŀq:9$Y,p!tWwK(RpgHsHJ=bKawMOUSOR 1{iRgYQn[=/uZ4 6ڎ~xc{^jÉgo PLFp_zVHEtTrp>Uh妆(27$5M-E55nnm*)Lwg@kdnwpYR12=eŘWW ]@~^xT] X4vFFׅqLn&x׋TS̆200y=nE-ٛ[ۆ-m.s'+~9yUwmWE𞰛-f0L&rh=p7:ue֝v,w,D.so̙秛'nl; RxZj$11m ՀaSIĺ L+ɑU /besܼhr8ke҆&eF@^F5BLZha;N/k bclM7Լ٩ JͺȴZąG龜jt XVj|RGDRpbFRN~Z|÷s:&|H}a2@Xx\(aVnt/0U;%R%$L'ǁ`S^N4~hЕ's,%'sT5G˷7_=6ҚOZ4[aBUgTEE<:1QϸDSx[1Øi [/yU;K(vn 'o}0/$ -D1RZb"ĂsOj{@|O=ei? J2L)*b: 9'UPhI)jw2Υn],&AUI !Zc $V6FlM=t䡏 -> 靗6 MLKߟ1'd:u5?|"*[W^^ِ?]shC5s)H)C3  yMw q$\\.m [OcR٥$:@ۅ0OXιl=} -?$ ttV™Y țCf9?l^c&)Ks;TChWjE0 mqπiP)Q  !ZFDΞڀxB0YM-O'I2Hщ!Xgΐ5}*M I)bkD璅䤠a91DGtB(ъR#5P~ rMVʳG~~{k,ILYhP9i7Vio<8/5ٟho2g&ߪ?/>O6̯YVkjUj'EL$j *&Բ|T Ţjmzo=fP:t{Qv"{S/φX^o8ȥMz7^G %˾.=i>Z^\B݁bՋrg]*8v `C 4Eg:9]?{ `ĆN#q(1 'L.HFxHuJIP->%%Zv$%X6h>G67 T>l*n~B(lzQݳA(cD>k$V{7E|ǀQ,X$wT"R"(*/ZM9TY먋ULq6y 2I1ו3>;4ĵl݂/ǣ45.=[6]S1憸,(e銋f|SS+. BHE ! 
-7I"y !D1(9M{؁6%K!dZ;@Y)g@!DI%Cp13i"Rd6PcHDq]^,P|yep te@bNY vcP< @[gv/Azd?ȹ(xyVX\8\y!m:alec[YY]0sէ«ښ[t)l`(/8MK' /8x sN0WB$a}:zPO~*F0TPq6 1 ^<($"(iad[=$)N]P P !Zc dXfԵۚ8Ԫ:]{Hf^o0 ~D1xvUc|kK[ vʮ;DGKN ,Iֹ0 @H[Ku) #-"Ko$3CC_eb=_rV9\!KNj QgzɱWFd!i``'$>$^,"3眒,/Ul]KlJW,^0)Si`Ɂl;Ԗ|2;KwI0R~_y w`gm/z' H^͏m:J)D1 jõ4JG&kq'1 s!z^+=j:k騝}63A9ᢓ>AYA )aMDh8{>;]{XEӸgW]~|&|鞽1^ڎ 7swiy?t >q]xS_܍[tސ-yArw8z|_0~owo,cnu㧧y|⤛c諒_tdGGvҺi_O^]b3oxy0wHiާS iK$>ĉk9>L{fV)ɿ?ta;fǼs+ߦZ+;KnZ4ms'(/6id b=wGhG!EU ;;4ۍ^Aɾ5dwBr)r) V$TO ,|pq&! J26VKH`EDi5ُdߡUx=n땿9G5PQIzbmerF@6!8dV-0xԊAX(jl鵑*ߟ!R%J/B  銚ݎ(,ͷ_;ɁJoG%PnuMjQmCyU򠔖Lw9ILox 2>X#ǝm) 1 ΥAKE=/nJ0@9D$a$ul3 gGU£C] rgOs>rdc՛{v@/?-o߾pyy/%*' FS=DTeTC,nJ{)UPup.duI!Nt̍:ƨbmj "B';PN8?( Gg$P "C Ԋu䚔.\8!PFuW i;Yh#+R3\(0it5rkdtzi 0]1iO5-]. X}r ޑO^Ϫsc\[{F;Jg e% wDOuB*잕fr\)tCRF,t] ;+*)̀?+;*-;_F`79#ֱ! އ! > HYQkqki-k)k] 3Z@UW lypøi_zuEVzuMba5b\_Ͱ륐t]1Ȧʑ|Ms$+_a1L ?gfJsԕHW #AZV3,iuSB z8|}ixӦ@F L4]JPcEb`qQբ+=iN7]PWZh4"]1GWkO4Z)EWs4*[FWt-bZ(Q` ueh+'b\ijӾނS4]QW8fNg\s3$9udFTdiKTfZJ4Q*aghi+ l%=]1&dZS|Pɔnr FV+j옶LE u7 ᢨf*QRߏ̞Yo΄R`N?{9,pO=h96tezW+>ܣ)F]VuŔZ4]PWJY]wjtŸע+}QZif+5EW Xa-tLTjћtELÅjL{ETQjիh@[,.s3 [;%(PUiPO#qED*]LY$ѴFcyLJJU5ӲÊZv.ݽ̞&4eA3);=q) δLrwI \e;):{Tڱ}y+&*atd'!YG!PdRXOZJ9uQgYU81z?96)}f޾"^_^ovq~mX?? 9ew= %TƲ_ק5xk_ =٣Vu]S sto*ǯ=5wZTӻøPRGd1wg;jC=HT+Xՠ^EWHW ]5r`f)Ӫ+9ʃS z+ƭg)jQR+3Lu5 XS {mўj[IPڶ8AWtuhKQtXZtŴ^+B4]PWʠҾ"]]1FWLt u%+FWkE-bZ)i*jEWLkS"6]PWF%Bs J)-ҍiWgǸ^֢i3<-]L)#9je NVBW+ƕ<eZ)ukQW35=e`ozYMtŴ)Kt*rV"֤+&jtŸG RZQt5C]ykQ]1ZtŴyqe{"LXpLvBSGWpѝVWh9Qž MWftJzW5KWkD-bZ,^WLi+$ >ͦт+]WLil u5P].jtŴhj/銀qu5}WLkTbJTMW3ԕ+]VfR/<7\3%jFOHQ{#ʹtѴl/)};"cœ6mӆ _Tzֺ^%5}[-q)٩VlY@rjr&[Bk}=b SK*YzSjݺfص8HWFW{hubJ'f+-V+v5uBki4j2k(5MA?(.0fvǛDTJ]\^rDfC^o]~}}P2e@O7juGq?V޻ ~m:jS=pyyM7/C"H{Jmw׷TX6Z 9Ø!;~~+܄ӎoW{|kbFʘߪzu.TSq/VY_(Um||-oU7*Ǩ͛˂(Q

~a>{pYv2  RC<|?'4̼= C\|n;򊢧>Oy|Ka~36_2x} 2nNȤ)1˨QhtAb/z9߼[^=I{|u6X6Wu ʨ*q!r!!Z+FdF hS|"Πʮߥh!tY䣕t e! j* 1&'M&  CSIbK4%H}Xѐ}J` )ϕ>YK]o4AADi4SOi@DP63P儋N咎 KEe9JW#bc==b'UTXpX,ɓ8@Kah;ΣdsrŇb*Ztm)Y:6˗o!bi j+0Zjf%z亦ܕ)aN",= 1;Lf 0kD34p'-Ռ iTUiԒϯ=G"Z#)9wz[inmS1Jt@=hÍ`,h2 ѡL6(OiG 𕐥v 4fèiLU7*)$UD#@#.DI/ݫ\bjSEKG:SAd*X ſ!IBާOUgUA2țk#8s#ԬF[ ރF%k >ؒknws=i:wK9jsi g16c@FUy mBiLHnKʗ VCJ 6Y ЗI buRƗ ZRVb>iyZm$Bɨ|W!'c#ZZal%E VAt$0H!Q!s(Gh٥=6*˂Bф i'Sn BAnQxi h**u(:ݡ-!xy9xi@yGtuM BRқC:_|/ Ţˋucm̭l򵬁DKMـ0#Ȍ,Hw E@P峦V*Yl(EWC *TD@5w˽AAQU6Mk` Ö` U#\ "U+w>݄*ĬeBGkl};I-G1jTPTZDJg=l4_-P^f$߈B@F=5(!Ȯd܁E7BnTzC{- ȸCLAAV󭋠@zKB€(!2] 4HAjTy>:LAZ qVH bl_N,UhE ytӞEwv4YTd52 Z?zkJBX>MIؔM@[T_ujL!zse_Wqӷ7]o/ X:_-Ǵ7s=W&͂ ]@H77kF֞5`C˿GBggUCŨծ[s55ǤͨyHY#h4v˨ ƴUF߭KoH#t5n(!/{úSi樇*tyF?EtNJdt `%.(H H,@z|!(!=:T}f=. >/`E^1H"^S:Ԇ&@uZwtH.PЅF-)~Ba E5FHKCH4pY/xp`\Sclci0Ih j hN\76xk+fnQ Ciփ*H]6 |t~f҃d &S@rZx tmKzgۮAц?jo9hE}C DU!m:ZSDàe {̀|12 =pe6\HퟟhJ7a#U2'MkO^ (T .nI2[1pQi64fs˥ZpU_.!b!c-ٱP|EśPpԅnqf@Bׄru3B?]wԢ`|JL0H5zFͻ~+v\ _AFZtj!emS#T)a@m9~8yѳ/Hc٘"J~7~_yo>>b`kh&횯xG?{}^X.[`@my'two /-›7wiu˟twu{q/ǯΡ/N×[7 pɯn|u6z}aW[ g@NbpaySN>z4acacXqr H@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 uL7M2dqLB=h_(t%zQ@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $Nu% n;ׇi@@;29q+AI@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $NuA.Ư` Vi@ ,N FK'bX茜@ .gVWǡ5y8Nt>w Dtŀx[ֳQd&tu>teHy=b+,thCׯ7Oi9xeۅ1oۄ)> ѨcYw1fK5i-8X|KKјߘ.΀;S[/ĸ2=|z0"v7k)0,(;ےhO~ ^~")8q4tjK.B]Jj~ *z>' 'lZ|œ8nt@i,TUZ+ s dJIHF 3q,! XkB9%6j V Q|6&*ت4Mpuf-fRVBϱĆuaAx+khɯ2J/+HW= >:]1\7 FO+F)ΒFϴͣn#h9uJdH}Гz<}M܆#>a8NO;'EW>+t9C}*MDW 8i pr}#Q ]!]ODW 7YoyDҕݟ:]p8 ]nT~bLJ 銂KMDWznpvtb'vס+g~@u֋J;H 4;8'84,p4ɳ4tR#KUso?qZoN~(ÁL| JMŕG]1\cfr5NW'3%3]14Q%5QFkN dwtRQMDW.MS 2Z<> e?]}Zޣ[[]ΦaDA1k(NJx-pǝHqoFK`h%% sb&`Cpf;C(Ez.ΑrE؄ A9CQFXJy>%&zqyJo$i^Omr"ty+x%Dm ,U j'Vu6.͗-+<_M']L9Z\r'c#_;Ųk#:_Q5USђۆ.i9!|;AL1 ԁ(ѨDI$\)E ­Rii !',u!|RiBH"ƑrW^)PfmTg鍝*H5YBǥ, WCUȤ+aRTq&RhQ\kBI~RPypӫ0Iu9N&'xJSó/_-`MΰPg0^ҩk{<P6?/ޘE/ L1##d\LiPҩ]q % T*MC0禭$ԇ#Hn1:E1W$4=z_kCk(lbp~گtڪCbu/|. ̀.Lhf2ͤ -Mƙ$MUV>帼9[h}4HEyq<N"*'*5Xfu|+G|X/AϚM(Wf2V=AvJ }Dz{@<':Cpx`NFs)惌+˜R2ҀiT(g6( t_RLsSIii|5`!1=>{FaG swP({t϶bZ$ha#,*F!=d VRnlCz6HF/gH#9@^s%@ cf I>v_A8X-k})6,K*Z~|7.-\?,VAZ~mm7<Z#u Tᓳ`0ȕnDuI'n-6u6#a)涆 ښF(a26\KZ jvZYhi &􎴦WyG+ȉ9} ZPUƝVe[V3V'p4߸:o\{0kviqmƠ$";-W΃ҞZ)) *"OdžobG\ˆ^FcRd!Rh2""&ZH0<)c"=G {cgssk\6OBsB1퐱P<ʶs! a !Z޷V8rFԇ`Tـ"8B:2R1Zlq 3p!'$!r#'p2 `AiTBȈv\IIv1ׂIoQx½q(mV /TE*L<)|X{pPNJ7bi1g;Ja0P,v n9Ӕ^~UL/*5vIfM;:_u'Q؇37D n . &rzG, O?g:/Gdz:M|nqКȥsZsy Τ3X6x(O ΃!'ӋIpݸI=>3[O^96u R{Rr3pJ1o0 DYE>D~U.')uFOBzQfwwSjyq:OVfUc M'f̎6L5e{A6R(j1QEE"s֕oH"͡fڕ3/iC=mNTڋ1JњGp$iD 1:PI.2"R~^@/"ٱM6Qz٩u Vqv\p}wIuSZ**>tO-ݏҙҙ oDzIRh!íOolBl3 Rb2Wܟ=>o~Q"bcJ[.=E刌:G ;h<ԝn㋂MWRG,m`;w[c;y@@HI))D VTx,2ǃDP픈9Ԝmc]|xԥMu#ZŚhcp5n =c)C wŰ>t8+'\(}p.f>b ^Lp!TH#Stbtu={Z G,x'$x3\DU[JcV{Ar 8MޛgSzEw9lVJH?B"ƈs?F YcBأK;NLY )~l%:g yO]q1Fȗ /cmZ4o4>K]7'3>^:V$u{_$~c e & ¼&BP5;LjnDh:vf3%arݒ||ҽnrt[j0$hO.30Bp{waZO:Ǭ>X;b&C2͌ .Z!n)ɔw4 c&Kלdõ=(@;蘡8"LĈN O(.yl-qirEB@`Pc(,<qys7vۍ]y18DS_9s䶸*z&]5~]iY?|EgIrGT8R$15 `=E39r@( 5. 
D̦I<ޣsC+27Y56V[[ב &@,`nT%7L=|T?%?T.Q*8ŒDVf/ͯGQ1ċ$q\=0uq#g.HH!G`\ M3)Ad1E Q( WpXB^zϯ]tJB@:hCVw:IyRw&uGkh>]_s޺ lR*T :WHQKoHA`ZbN&+aO!B>GxG1>ITIА>d'c8 =&FA6*f"Ā!\_pgob-+~RRͿ~J5Zulݧ~}ƣWF/1ll4zkz-o^<O7ЗbSsOlɭ[=ue9[u5*6Qh z#Je#lt9BEg5u|_K2 1g۠rC11fD=88Д4L )pX ʐG:dudScfDh^mlC*z3>٩nS}(QShdA)Xŀ"EB3cUuSr-Wzp9\YK*&Yq6{ ɩ A**{p5s'y;d7ֿ9mj)sK\1DoYj4Bq3O6 5^M+ы2!56ߠd&X}`Cy ƻB}kzbGD-C$v:S1ӿ6<DiJd/IːP$ljE (k^H-F9tR\V3gu9.w'ۏrRCZkϣ48 We\({S6fz5 0~>$0K37W0\]47׎6jCsYeN Pm^Mfh DHbDtN uH uC[=&tN$VdQJ!C 9&bt6lL<bZeNͣE[A|Z /KoT92xU8\FUpe9 QEZΞ3=UkV YVG]Co[bW5=*N#zS̵JIBdv(~K8\Β}}%Q rlí F,+6m-/sQjMJ(A>c &GbV|NuQ9mA[YeIqzfJhKQ9y>^7𭣒]Ɏ^Z'JA9T 4sxO'*bC i. OAO鎧S# 4fd1^S&Rt3IQPЖ|˃e`%dQ<@PF2wj2}|ڑ)L>kZ_uUNDŽc<Ʌtd(5N;`!cY䘵CHVf.E ,q;됏ݦaG)lkdаzF`hN+˛YY6^|[O5i0XcA|㝧G"!'f+A~B*FQ:JH $ytH J UFr!"Vu*"k:CLԅ,PD 5Ƅu4`sQ4L'ifV'׋#=b(msE6$sB#o`~ॷMR@#Nf`CCrc꯸e&A%ZZJ"DDgz1&霹 Ѡe\Ihqs('7~?~c;մiџiWK^A\8X:/Os<}梫s^_O Й͢~m+ߟ;tvڙC|}w!pdg.r߷zp[1V]s婻}l`; s^s]b;l_CttZ6o8rl,O"@|&i[)X͙fAp~p iXTx{Oh 2bHΉYv TiBd[[fL̑ 21.ʛB d|*+U l'^aF)cf6UĬ2 #72yq\ 1z^NԶj v[70V|*r1ƅdJQ A<ؔ@lųĜںx.x#9RL̐#E&|Q$a"HE_8$:iW~e5s6_* ` "V@" x*ȂN3Os 3AniQ9IAfIsgd)(ԭm gLi8InF K( I 4P6.V˜˧D.NK]KEQ.ne\wEe$C2k7P6 0{THx*ትzGWpOy#@X^,4mIpCd?|#zT= FzS=u_kAwzHr}7P=dX!V=+2؊;*b \_8cp2_vzY+ވ]JP݂+\9p0GpE\s \ɮUr`pu:p%@2=bpYoX+:WJ\ \GpU ּ7pU5/pU=Xp8\rmt W+2 \s7sWcO Wde|+ ښi˵u-'>ܴ?яkmƼUb ~l~jyM/gp03<LqL޿݈B`` i2RvJ%Si>J2 \GWd2\"\iɬ2=+]jY YLtvO BAoʔZ*aWd2+"pupeV`Y%s) \VCJC {wW㫇3ŷ\1\\p2k ^f\ sYlyoLWJ\ \ %D *־UًTL pupe R2ad\::.f۳V OKU&G!O :4Fhf\fl¿]6-5uW>Y{C.(T qm^?}&kNXGyqrw6]* v>_RkȇŇ3\\N RRW)/.?m=>,FLt7+@U9JܧH>; BG 8D OM619vy_盥tך=GVӖ_>B?PY#􇬑ڰ5X:YeY;AYۧ{Ԝ)*ܓҸ^f%fO$:+D #.ιLh'߸®; چnJm819n\4Wmы2w蒁Kw1LgDWiUC`h f1u'Z_ǔ,FJ 6ۛk] G]90N l8s~?^]>a-՜8N@/*?jYAgUc]̍ r QfZ~ff79%ۨ%7Y$ X- 'A/Y%w7Vs騃Fe2plvljQ1#6J]O*`{[u_M 6^zK$Ku2t_ M-w\}Z(֪O-+l}S J+q#IappC_ ,vE]LXdOAU,{Dc&cUOWEl]!\˻BW >Q>Jkc ]imX#]!\ jvBt F]!`g2*BW>-Q ++k$3)3p ]!đC/~>KqOOBk_(::mV@WsZj[½uuK~Ja{ztŬFwD.44t cF)]`źc]!\nBWVɶ4=]FZ:DW ]IBȥ8 -m+Dٶz"tK֝G+⚐+-Ƿ[j)M+4vԞ'ٽFDZ֥&3tp ]!ZmNW潧WHW)m gB_D D ,et) |b@3_D-m;]!J[W_]#^^*^|#$_"d?]W^<4Bi(u*'Еܡ_t`f%u.]+D+[OW^!]1i1I% ]\YW *cNWR޺ztřьt+{OCi Q ++Ctup]+@kj;] 5ҕ.9K&Eg  J"NWڞ^#]2)l~= Zk<ݕZCܨ?ݒY%Z})ᄐtR(;'KyGhyR',v<Ynrϓ:i>Ѷ;I$ݽv<W)8Lh2Z܀| 7ˎlny7;7!@] j<̣RT8C!!$Yj}$6 _vZvmM(1kuaǶ@U(#y 0cL\&*"&\__jt*F:u7~Q=3u}3~tN?~N_e,.mP7=磶Bo}mb#۞ѥ:;PrEtGU,ƍ k-ryp޿huuyv?'DH ^XQ|$IhrşU5"-2NA1BnBGURCڼ47NB7+ź"eu7ްx;^秲;wO*On[\7< uki~O{|?M{'_#5g=\M!g z:IJK7V+mmj Q1#uų "1A %씢'3SQhՎh9#4I-Rpfŝ"H ^1uJ.%% 1P f ($F3g,wVz=SlpP.p\"É- 7#9Y]ZT 6yy e8Go'.Ajt|1߽-t;77f'oHlq BX7lT@)pg19?۷7^)tݦ WT+WU;yfU7X{JAHVFHF`׈sxRYeMlN&F5 -Td̆o19|YɍTy} +jcp+j_-vM|3tw)(#e]ړvURy[x2UKE6N&)騎JZ jQ;䲌+SU ZwWmGYard@S6V`S)VZm S&aF^തQfTFc?ʘ9e@=)`_ֳ@rNLfflƸ18 mυ "l-nHM"]/r@h PN% T o`eB8PP3%elnx:{.2½g!mr0JDl|;blf&aXV18O'ql k7&g6aֶ=k>7VDUT+"U$Q 7rbI"ijm|.xńFlL>ZzFFIpis){j]^qLBfQ9IA&ISg`Q8i(-7Ne)Zo8MDB90SA;)h (lPb>sh`'>nVl\YRHFZa5_t׷Iq^U}Mv0܎idV*ƒ'a4RD0P&BeD&aދqs.z 6-M;92sהh (xAl?Dgp*gc" 4)2mAǼ<)`l˷$.k P-ITH`QjsR7I+qX+X?.\+ [t&?<繧Mx.)5)p0"B8ΙJy4$hcXe>J\J<o;c94OhO7i2XB9?Tf^ N.}jlwdiRJO!3W Ɂmu3X\zp&-٪ss6BQW@saYVjJ/9@qomBu~.O1T2x"7L-`!G4(*FXr?vO`}% Vq*zSvkuH&|#+k5ğV݁VfCbz7)r"?8*Ϗh}||eWMغ)<񤛛흾.hU?0U7^h)hBiW^Z1 2ztc*z@2?cG~oBJw8}YWvD[%|:SC|T:ATQ[ޓuE͋]-)%f;{U XQ P4Ps"M&,5#nDl$8BDo ='w_klжnZ7t0Dر>h>RHK/M@P6ro$g`HŨS2~8Dmgj= yTJpBf22rT*KDM$*\&Hacpν l>tәhª@.j23\[grzY+ 3r;Ycu`%27>sIpcxIq:B9{iLA`Rܚ Iy`ʨJPY[4%2< Jx h'fg:_Ğzjgyx艵2RXeRpkߧh38?X 13O#̃ cO\zm myp׹q!iH\i-qp乧ƈrDY%U1e~/M1on_QRY5:{ф!P'EM{v#:x KM8*UUbQ+$pO^ʤCdT*Gm'DQ)iPٻ6$]Fw 0YYqe cdHʎgIQP$Q kzU]Ncb,FGh.鵽#醌 6."QTs,E~|α) ipN9֍Noݲ}7׹?kmomqm(/÷G9 v^m3Y9<%:GWq^ĚaQ[ 굔GA#7[,bW!Ao7&^\71nG>qa%QIs- s)$1eha6n 61 c u#:V[71º0nMH܂q%hlO(8 
9?ZĘr2/e:[_'x.*m^4Û{_M\#[/߱*%oʌ ]=#|86=Ct_ܖ=?DZ6Xmoo3IIȚ*Gg`xO>gU?dٔMedGNn9C?QgdL,9>hك;e6+PHI &phS MW5e$#`x`jq,b~yC5ps(IhkHJCe)y288)%iԒ9ЧbGwc^o7}1}ӟP^\xs%C)H'5ʔ ~z黃ujiN|p,0]p&2Ơn\ȉjzF^ >;"z'uUE i0 Q// D`FB>I,9$BާyMv:o"V P(&gIU 9o:rNt[B 60MN'Ą!5 .gFcX,.,ĈoW4h #|A;Ĵ1e;q9gXXa2o&C MrH rǍȻS!d}\IHg2rO&xQR{K覬AvtS2ZMNn3tmp48c2A!>7ףTGeQy*պ@# - ЦSzj8N8}4B":E^INo.=hLfKfVƄ9 P \`YFn,7AFJ>Q $w)UYpkmQA6F͞r>t}I4*(I%K"FS!-"uLJVۛp`r0KVAh_}1o ZڬV~T /tI% 271BQMX&"X+rOjb;j\c1FuG$@vFV}lSKvKcv OEv,h.:I,$Rp\7Byh@#k갟HvuDVj갯$Iབ(c9 \*c'P"8M;j4uk>զM穒`#5e17GUF;'# ՠ 0ɞȬi""> ssLm{Rj*@F4T4p˓6\xB1tLd P\>lqǪ ^n֫uWDF)R" Z1LLpw9 t|kM?7]\@G4HD@ +oJjbҔ6T R\* [s"[OVppˉ&*"DB`#j9b"lŸw55oS). Z *un VSJQJcC)ð}]w6L^J~G2fyHT~j#{~9IFs ev8|8B5x{;_֒Dˑpr`=!5bN=UZ)3.H_^e>Q&tu~C˶S飺?T%U•}5|ˋmkB "й&p2v wWW3(U W[ O4UݞkMKumyۻl1?i_T+le$hݟFe۝.m'v_qB̄v&.Vtn~iYa^p*杼|,p6g=pJY'nu cd:0DBR+{/Z ˃MJVoo޼oïo} zW~_pF纇kIO&' ܛozԢqLmu>xW:yͼG( ڶ1vwai<ȇ-=v]Y]wC+LdVYaˏ(~*fm(UF X&B=m 6FiKr_[޼ĄJy$fK𒊂<v*aqL.SU+Xȇލn{&#exY0ʱh6Re704(`" %Ȧ"x# HkMhY#Xa#$F^nSq唢Na*ybkyXW/@(;W^;YjJn ڦ(m vEcYȲt5JU\Ta5USo?‡a &$X|Q .i( @qJjh@b< CcQ&5.+--ҠdfZ9[T1ڱj4ǹY cp =! 9~<06tCc(C6E3k01 ^=^zB `4* ~* 3f*hFKa +$X2u2*}k2sKT]e*UWQWrW/ω YwQG 74fb\gz.p ژ0[D\ vIX oZި5jv匿# iu%s9hƍ}{ViyBj tL:Rt6EiMN,lJ˜qiV-bIYhy5ѕ^,M~Aby!sK@uK H,`\ &qW3V_ 8S UEw^˱Э-%PQ5L!j˳myJ:+wW#Wr<@H9 zwV! AmEzHZݹ_bKX[eʖ==j*>U):DЃ'EAS Y,b`0 e /aue M ]v~jњ`:Aw&YU7ǼԱ2}Y1 (w>9$FGd Q̠.XtXdB\dVQYĥ( LƶܴRimCxbm#+)H2JTT*X-?9:# ̙=tE%?O﮷"f&{kRm% ? y.3.>N-OUcrv7=-ЁƔd,vW]5@Cd6.KS:vF슲*&y| }~{6P3dgEIDz@8dL@ƅPf.EZS4A!Ոu +}rYA`ZȬ׻Rkt=`A= ?!|q1#G%ΧQwԝs9;[ 1gFH)ĈҘ,UDdD)Cu@>kkFN)6ko@E҅bt: !#Yi 'Y3Zw<}w}w]ͰXob?$Ni6_^l>M-Ưr6T=a\|x߿S7xӟ>OίJ*RX:gx6̝$Y:TײFEBn|OmL-/}wbUS%9اNǺ}f_=m绫'| 7_%0YwLߤBp 'c'OQklYnv;Q+ /_2Mui l-{V{p۶9qW,( Iڲ[uP͚SY?tS xtut{z~I>sOGv<[szf>x렏f7r7?mvw|Cxa-ZWL.NE>(:\)$S◰}]|2^bGPt7!Fcu}ŹN !"h!F褰!Egemp1ZW{rJ 2L&^ @fGuy1GY6s`c2JH GfJy J14K),c= B+\ʌ(=Ą$QEPHLBA)i"ڽt^T-W=ctot^2tSAXTQtQLPIHm%3"Jidl(cg$c"-k?tChw8Fw4ﰸ (EM:WgQWC; 4< ، K8\h!܀Ed`]pE]Zdcc)x9(gp.Qi' etmsmi"!Y,%A+Vް8(@"}d)hdM^H{V@le=ClH. yp.BMD@,̞bykBsgFH2Q9kb`mVC[/AA-Er>3ެ( r oB1D- `A[w")k BA(,'_WKvlw}A_7l Gfk68 /=3:1;[-MJc{֞}JScc{ 't?MxSgMi#t:6l3mNkx-X\%*kVݯ[ZnQ "E@ I,m)/)112f  P`2ȢcFO)"$ 8Bg (D "g nV$+gZ`HIZvOڼ~ҮwGc9pϘkhR߷c%qo U@ c:v&cDS1J?"ջ{x7dm.lZyQٵԾnǛ툴iyonHyɿ ۿESmiلEc LB~Цxv %]*A{P&mDtAeu ,n$(HhE59g4$S,E*CQTT0*Y P6@6#E /f7sZtdэ|\O@%f3nwXA8B;~?9ɻ+!OJlSd-XQ[)!x B8De!Oza^Q#l<*Q1bA$Ѓsɂ01238RBR0. қ]"2>AڀB4)©AE飴&SҪ#Ɗ9VA9j5rOw}KFmgWugu=Maj/;njz;p?nSnsaw˥ EFx4ٳYb)ErVibҔQQ<'UϠ5G<ɲԔ!ђś:),C ?(j:uNSߔ5OSzHוux 8.Xs{265=4Y;mH :JCQ. rP.Bԟe}/b@f-CCz(jmM~y`} ^jī=/tYo>JKh,"'R\!) 
$۔29)DiZJYrq:W$A1yvXWԎHt]s.'2[|ɋe}R %[M~L?)*”W,/>nOԿ [6\-a%6k}CYe_& nO=?dz|rz%̴E Lk?hڧ?e}ȿHGb=v54æŋCs[{v30C (塉w_:E\OTV!V0^z_[h>Gr > w ;{ {nB1ݬ %*clBP7{<&|8$ca?g2UFw=qjάz]}Ѩ:,OLSf0ӉIXCtwP6EݳrϜcNw/&km𴽽xb]N7ūz~n2C zƊ+6U[jZ_o;ژ);m, JD 9s~rL.wo$Hz$@JydJ&+$P:Jdž?T&E 1جxλ&K:襀hYT6\JJ%G(Q.iO(ʁ\u|u~.%5Ƒ%5';(4 qsK>0RIceJ4[/O 2sFgZ{8_i .-׭w 0dz6f2 K cTH*E=-8:UuԹ\%G ,@Fto\(Xg >iOU,g 7)J+ :@b?ތCNd_9k_[LKr^TmW/=Қ'Gjprx\fAZQ%9$q̨D8ʌGUNw<0F\in̼6 -+Ox*v#5opH17hꐜ(+gDh,}*C0ޠ},4ĵ^ iyZwҼKAi"$Ac Dt䬐Io - .cBeLXόU<#AA&$SZDS/URֈ]#"C &Euiܸ<"Pl<*gfքL QdVr|L%fIDr1>G 77}s(Wu03hEҶ8Y^ 4xmbDȝ G?ovϡ ;F;ǸRP d%ifupfApep۫1v뙤r z|wVe9lu[j1H~]=02ǚCfbc)sy9Hi+䌄(K5"ʭ瑤8 ywR5P> U4TW;IeرS[^RIR)m$YJy`ܘ5Gg](G]SIZ1g`+orv"aSB?\>xtcr1,K$%|yyR:ݻ^3JϘwf)62%+L&{͉ 5g2%VJu "53T(3nv8rE} H$X)G|G* IA12H=T9ӍAtYvp%K.Z F9FCnDY)`l#`@80 ot{"9Hc,؈jk<(ENaU6<sVg=q<giYgW}jI_BaR4abۓt6/x[~ ` ޕ\(Pz,I<9IB>ιHmv!YZt,r{;aiޅB))z hP+_3AQƨurTH@Ԭ" U X5GָH-ZZnAg7R18wIW37;;_õlI\|r ț> C=-l0K= UMpF0+ɯ/χMP9 /'g/ Ybe'ASrTige6E GCZ.duٝ<}eBf*/fï4A<(be8| 1 Q){!z/G}/Y5ϕtN'OA$S{Q@q 3L,E@K0|8m)`TP}ద֪kx1EXU]3蝾ݗf]仂̣}C$J)Q`s4 ȶyvh zqE^jȋkvڲc Jkc2G=U`˔@y% L(r }'iQYǒH(C@)$F(gNϝ2& @CR0$5icpX.ٴEgs漲_ҙ k*^U"G-b3楷^$Q-A/pY~坎I ejFpcr)$Df}$']1-"Z-cb,|Kgʴ87d2 HK+M*g'mF+#""c(#@qyW pA]8]m};OHn",ˠ!;gBЉg[k[u8?ڟjj@\@V $@ +_JjbҔفK*7OjkQsk e9DE(TB),Q@/\ʍ qdn7Īn$R:r!J5+?P0ԋ TO(/嵀8+_H\R{<(y(>L_?.6,T. G`}2:xOŕyg/bCeBNDDIƀC;37<ΐДgjIh[8s91 zJK.AZg;YӲu1!Zz9^綢y+a˾Y4Vg*1ThAd:wy \|uwe& +Nq 㕔솫OFoyy ~|w1;8"feg`ӺeY 臋Y-vGo8![J$./l[1t{1o-fT>0 q48 [0bǣ墡2gMTJn.uQmn+'eӼ#$,|6SdTpŸʛTMR`T?./?߿>w}{w8[`}|0@|SEE3RnC6Y-~;|[QErfc.~7 >].Vye5ggcPG6:`ZWTv3re{,TQA&MzwuFhJ$GC p } "!G`REÃqDJ J Oఠ=qѡ%,8ʝtp#hQs@ɌLmud 8uiUlbǴ66+!;8n |85?;2 9g]iyV>^`Fix/ 4Zb:DW0' ]e2Jzz6t(%XQg\hCDWCiDJ݃TOWv=Dt2`B XW*]R(5Lr!ʀ5 ]UpuW *zuQ3.+̞j2\ABWmR%ҕ ]eu2\uhc/ >2Jޯ]Ht~HX@sS.Gk7Cŋ H#7◟߾*&Yjb?#JϐGr R%`-$b6ַ*s/v`:|Rb~e<[3jcOøȦ]]ͩUֶRj8O SA[HS:A뛒? 
[binary data: gzip-compressed log contents omitted — not representable as text]
g1"4JjՊ@~je![K @r۷ TzS>r׭pΈ൉NoKY ܛNW8 u6kNtkR`QPv9:\Uؒ_|:V4j7V,# P{B `5;EЇWQ}&GYwD @(DXВ$mT(q{P˕EsZE! vnಏ9##4aif Ob,"gњR.q>.jZpWv g sP-uM㡎#,uzTs *r茘V韖bGgl-ܝ"aQ֧,: $h&([k{sOn#ⲇn7m C6gg\,<(%Y?޽}?˃з@E؟_10FX=2-}~wY 䙃v0Ir\zҏ($̇^N.E;gn/5$:)IAHcèc?}$hnfĬ#`Ü9?r# Y_\0(BlqWaӜ a8EQpX3JxǟW#d2ʆWE! [n&tx&2cx=H O (()GުUXmSYac-e YCG& xRC<ZȑQecӋ|ѵlgT;JB3J~@V6~9kV`c#>* Zq+uhj sxjceB^BU\81J mc7E (k%9ǚd, ׎jqBn^J9}4O0thD/R`̑qqB\s?:v!APgǧwetҠ.) $9gt{uaOlnğlw;<;H?MƄ8`L݌N_o2M EӜ7ڣn1MOv.˞ ;'.MֲF@_g~{{iQmlB(įn㲈8ۻ<)uun$)Ԅx &4i>Ǧϣ`f$qN\".ӹ$CF7>p&BL#qz 3P:P-! F(µ}/`9,㐖y@iZ  j c=sا*pi$" est/& pԇ5[sV"92S:Enܢpf_5JM7*m />#TezzAFrdAGQбT3܀]ߡXs9`.V)UJ˥-gLUşE3$BZVxC %cuVג31 qWZlO@- tL]-E/ufaK:St k>X ~nͯ\-Џ/G$yLy} n(^d3ˬ O4%2$]n!~xJOlꃴ38rsn`&[m L~I ̇: {Fӫ-h݌FoZȰatj'WYOffסy1{4R罗$T}8>\摹mo}g柟}񿧏4C=mƑZM\>,IfӉ fǐ""kjze ؟h3'L1+kCd2iUN.~tF?/syO/-K"nŻ7\O[x~O}Sf~G{\5fЬBJG}\wy{f8Sc=woa[=x̗T_zzܐy{}ҷQ?m2Of7<=9y:< g킶;p'j/NJ 2v_%k46N*]7TJ|r/+V]jiwYh6ig,vi`*Tky $ɔ֎T^ۭU8l? ӥYN" PlQ6Z}{v[/ W!8џvrMu}38s|Z+{ɴNF.Yt9C \c]LsTq xչBn2ȴ<(qE+MRnT"{#\~z|_ntFTvTێ vJolp zeשSO .HO \HGm]!ZTi)ciCaw%ָ+=$s \PӪz+sd+;<Ԧ&_mzbΛ;)EN74Q"%UV;݉wOa W+V0a K 6(IVh/u$6- 68b'P+ulǗcF7Xa)VgL3%Q3%Q3%Q3%UTGlPUtRhkYHQUCkh`26i/M)Ƥ*0FU'u8״jARO8xL FԞ oBjō) 6IMjd6~c!h弇&M_ 8=\-TSH(+LqOW|H!6|T"2h,uG,rIP~ Myǘ$*sR5wYw 5` RS|Bï"|_V ;0hۂ'e(1["ZZoz_\;WˣZ;͘$1\:q65||O v%- <_8q7\u\r8poپ (@.m0kWrQzwcQ8Ls**]@_ 6P,[2u"j s @BoI{PE|f?0I,2};SDFr~UkB.@S ^k ^7_S)u1"va{i1Rht& }m`)ݖ}EdfOU>AQA\>q"xuآ.M .85< FZrКx= ^%߸KIPO.S\LRA1Yϛ5tߑ +w",}R6Rjl3?^U_ ?z|e_zAg  IVbW%_2$3!ڶ[P(,t[LªE<6oْ@iH@TbR')( c_]_~5[M'[[U7g*+8dg\iD]}[R?}.b,h3JbHu;05MUw?TaW4YNj Iz<ߛLKv t%7'c@dDzJDP 4TMUʹؙDjJDZ-dgZ**ZNj9SmTZ ߞ]pZ,h$Ś ,3D[g--H@Pw-֖8t-j}^L7 F|{kyuWtS\ljk'z ۏ_xEWA^9P&P8}~&XMv=DrB!]+LdVZÝ ׫"]y!Kqe}spc ګ,qי(h_Uep!y>bDGK[XRE'+J؎f]H5#\G9$U걌'"o= ۸KK0w'6 c[/#sZΏ,Z8Gqj)ciZ}[6^|%NNT7$ b EsJqY& *&2Ij4 ʸKS 4V;=O0Q*sӴv G#]?2׸b^>FrfMes:tEk{Knl(9eer?/߅N%rd4II9?QO🱗{._Ig t."?:,ΑV |*'܊y:ɂA%bVZɭ˖Zb ND8%;UEBd{kr+z؍ͿLe4=n9M̏|88pYȼKL{ zk0ҕr}?/d8{Gw \?OҬ$5ݧãQ?mM$'q}ypɌ\\2.^)$^^GܩJ~ xr wZ|^G4E_{ys\:%H?8$g1 XkMF0ZF̏>~ | ۻ`yʛxW8 a埜kWO_az eNt4t9]+N_f]ɕ"?p x7vڤ]{4neiؿslG__3 Σӷq6>d4>]EpoC<+IxƙH8kGnԦ &|4 ]fAϵw0S?<9(TgR!ɶ~|͂?lٻ8 pɦUzQ?7Yyp zBFJ ?Vh3#u{d+=*E"YY/6o.o>+^{Fï?H"-RG6Xo"~Ν/f_"a$Շiߒ"7z}tz7e6R:}Śwqv孿Әdi;գ^-;nv/R3Wd믓lz֯E6E{NpvrN+>bM?k]dL]_~t+:G ^89XKfnߜi^/Efo.ZZbwBFkzfX|ΛSs -4߯SY~^22 t(1fwZ{Φط+W?6ΫkfЎ%8#WKWs `V^^~E=_.E=Fmq'ͶZ%ߊ?/ɏ8X\.9P )ƎH1͕hn~<;y7N)4-8-:Ε)U %m Xe]բN K,+60{{kn*ɺr 7~b 9^ɵFCt8EA`ݐV[=Hٝ~Ö QE|AC/v}ʫόO7uoȁo mʊ"dǠ#@Ij4Z_8.O~]"%| 9Ŋ%oIEͳmP |G$ l,<2 d>}"#m$ni#iklݺ[=ڬ펯 "챸뱵!dS܎YΉvYhU=ag% D9ax0N3io`el2H:k,=n3qɬh5&7|,vdDŽ[7䠵.hlᕶ-*yB RxG3Ǐ'7#'j>_l6m&S$7n&w?^ w<%9vgJQxCr/KWg׈=2]42[({%J("c"4*֩HlpBlϪYޜ]nǖq#LL>-D}!m-WI E9#t1idR*htGSp+L`zRD_`O#)@%Rڐ,, W6d1)X*O'%dN!Ɠd y 8=Lerm8V3S9XL]H/t)9y]<--PIQlwX^6}jL$&01gA*4Я`4*㕒<%Ϥ;"CA(z`#l#M'`-xIhʴՒ y(>& jF'K\E-ЖQI L1>HO;cp'UMCu2k]$E9;t-n8fVG4_AϾ:(L=R(vQ?]§Ͼudqi'$5 }߆]&KMQ0iTaIeY F[RZ,A9|M'GaJ}5Є|I}YFZ9[5Š3JҺ3+tQqZ&1%b2N鞬KNlR.Y҈(kI50/IU2cCfi%B0'DJn%V ꉗ{1iPG5ۼ44YZIe@weZcֺM S|*n,Қdx5(aəj;]bgiaRhېӼKrRCUX4+Ĭoʩa?S\6v&uh% l]cP yׄMFHA|R\8)nQzL j|~`I}XŠ)퇈%B! $gHT38%,jW E !\ Vw#h%쮺-sC{O7r<ȺIN4P-ubv"lqMdp fCnEݮA ג+#NiRx򖄢Gj.NղP-\VMEE6R_[O7}VΛWKNX0]Eyg~5$o [ $4ܬ/t9sѻruλ|tⷛVSM]A?(f 4VbD+n R1 պ<C9s]7fI,MP0rP!:OkVþݧGP[X^‚50J# aK܊j!vSU@{{@CAO,fF0iŐ@ /!@kP9Ku9hiaZ6vs1ӝ1?51e ;0e@{TEq%iae+4]V!&H V|AVPsPy3#YK*Ugw\ք^\6eh.%w6_~q1S L%',(YP. 
GEm->,8m½8}N ܀781_s[|7z-=i6ɜM#+ģ'۵?%]݂tĠŦNAOoB zbVNN>bʂLzzIBvˣ%$}Aۛ4_pw$=0vԈx$X6b&.t4)h۽ocn|qf o͉y9gbE\ /׋C;'x{<=1K6?RVBL,IGŔ?Ht>#x-!|t0 O|Тui?OR$C诌O~Э7\=Js;ɍr鶆Gy„T'|G"xťzG;ɀJ&i58.Kn J>r} ZO+fKK+E[Va}`.R.kmz]o '!p=,pvݱ[i f"w5<-PZ"e>MrwUdj[W~G!^V>-.Sx^_cZܶ"̗Ņf~ /Nݻ)/-I%WmO/)cM,gTcKq떩[Lݺe궹L-1EЂ5F 1E5`ΌAV.g 9W9ҹ@ixޗ_W*$]'KKjSWǔyg2h~?&blϢTz[RC(u$y.l6LmV D؅`gBe`@),KqB&BZ չ3@uB QGu7١5 t&OR BS2W5Լы@4ϡB rZ( /ХªHssƿzufC"[%s0ܛ)*;3&eT%2fҺ"W}V[Up!jlߗC{aQξrj$Q+9H˟~rb>NOOC*A 7>^ T HgVBuZax2H캂} ~S]] *ݷ$.x=Jdas<ǤFe5UIɥウi D9\pԋ6퀬LJ0u ܷĚƙ|s8⠿ap:g:>~GK'A"v׆NN v-kلw.^0gg}\)$T)hhDVHjrW@!8:q*"8iŠЌ$Q8[#WψvqHN=]=Č^qt/tR`L b90Be PR%˂ kdʩdF e;<KzqT "ZQq%ql$il1f!F[[!s@@˾ʿm2VT1Jl|7Q*]s9zN|Y=.uM4,]wqT4Lw(Pv䩎 KC*5ײ1ʫ\k-WYIO+6>$䕋hLQ4xhCf [(>Gv:ԛv v!!\D),{EN?<$7?ˠ>fÎz ~ֵO]Olh9sGN>;VǶBOY夷a ZQSb7ü\2u3# ݢQ(s <~~Hڒ?M74YFK]UY ]-îu2m7|o&:w1l2")!@G򷰢a3<$<7AX7ǔl 9ҁP#9+/8G @*gtVuꡦpy\£|_n}\~ҏsU?ӯɦ[)$]t@lBĀNwNU(")!X$e8]o|#[CO[[FRX OWL`/w1xF .VeinW;VOl}cc1z t^Z<ض> =S+!>wCNiϐTf3\f15{1f>nذas%d"HDIYѻѠY[*JjJ :6*f]XrF\%t(8c=R[+:+Cص^%i&iM~E\Nف0l@FVm7܂G׼y d1ςj9-NA=IC#c[r%n>,-׊LVV:([xDy,:( IAi_H+ٵ?HdKVn~4x7#"v^.8A^ 3ǛɼAO-3xWV#dg@tZS]/%8u9f_׳ԋ%gq $ۨqJ -y]/:m3ރ ڃ|L\7zz;״C Xq݋8tauw?!!:[On0`q7"2)`Oi@'Z )O4о*C? 8$j+{|3s+!c>WzGC*0><^ŬKe1bԝFnBwc4`7;1f'a1vrm{ZtO̓OحD{iL D;H-:۽v:|JNX0Y@MTyJ]"x ^):)1(d"@1';n5Np:'!G+Lx~I|Cv5FpՀ:3*x!kRax rs :L]BTrE:!S Q.W2W'Xvo5_D`5cLY ̙C)ȈPB sVHea֙rԔQp&O.چLI.2WǻL;Mls032L0A 8X% /O< V4K .jwJ 5fʯYLUVci,1V^ȲF̟f1Pexռ*DY̼$r FuFhNB1>h7-Yh՗hdQiZj-Me}нeOѲ6<QRG"_4\;Hw쐾VX`YG%**=v\O̺v'N!r(aV `!l˯ȕ!<5`^b S;χqg!Ͽn2R $X i*2*!5dit Oc[~* 6'LMBGz㞈,ug7WU-(|_Rps3$'{ lsUeDVvv!]ez֋EVBSTtUq%ꤽZ#a:7Z ]%͋ưqQ'u2+n6Ep)/SJpGy6N> ]'eyH{''۹ y+$u]oSzNQlu^xt9N q/jD?%Rm//>+XW%V#*7FppVzOuQD\&c|:q'fz?;]b]PKhC6w(DHji o篓ͩ__7^6#)ElL]fۈ/v*k*\xQCtE|n5fPk;%_%ooKUrI,.[Ϸx}e"Xu[|97*Y<華Nat` X.07et/~A7.~%#feo~NŦ<YLgzi,z }ڐArb/l6WEZbbM%HXZ լ0=$r,ۮݨ!E%\QS ]^>kZ]~_)¤$S ifrkHBkhcu߶o>YFY&=A^i}38%HJ+>CxF7YOюdA$>հ5>o(yih8J:2R%TҤAXN ;>h<&7:HY\J;\rSoAv16Sr) Dҝ ašJYGu@#>GQkLRV@UԮ "%FTP%@},,3O7,6^Zl݋ *rj[qU+Ӵ͘;)MKH\lqlM7ʧ}Ro,aX5k)_W^iG78wb r=j?' =qb4Cg, gAY%$Pv5j!]h\p@")P]rҌ2!~5 =}4m1knݬu͚ia0 DHPxpc J H} (-")>3~;eBQ?{OFҘŪ0F۳Ό^w@`Lu$3ߠ*d&3S)[FT2ƙ":b$lN% A,0aeQYaɹ^HxP@Vn[Zv@l%!n6A4+r sVpfB eȸRMb !NR]co;l5a\D-L,/n ˋhJ3u2= :&bxL-|;S3(}ʅcV7Ӊ%[ksV0,RVkYtw LJe%VJ"xgԧ Y5 GiBw:KTaXP10K50Ԃ|Autv$AFÅ (&txf)C؝Zc%UWP^ПPVP#w- M.1R;ĸ}E.8u/ueOT f2I.3U $X[&+\fwn}iXkqY UR@ T'%l%K֥RR!E{) fJeSD%# ㍢EY8-3ƵP@ȩ$m ,Jy+PH\Rh(UPJM%ѦBxJ[p*Rp. \THR+D*FdmO};9, jͫ?? SSzBQ[<)-A&0\J  eT!3pG`pc澡?RQtdXY9gg.b@[9H@2lrI4Rn6drA F|}!"N4XZÂV<@ラ*qvr(RXXmi '7)w9 x܊I1٣M%pOz,9Mp&Ӊ>ZSձ|ҳyю@2aj S`|ŇmƁL"z3*w|# tCX wOX\5?xaK6`8(cn{v0FyRXtI9E]$imEfds(IdyR>1)BjD[0@Fc|yɰ=ƫ5vq^spŹNIJD/T8SBoiz-jzp|oL ƻn=u>tضkAdg;y/gGSs)bkγn$oA0# J- \ q:MzҤ'!MzҤ'qt8X)`!#C3) Q8l- Yw5pX e5]1 k Lw5Fuw¾$n4bJ`̱0WRg^Ç./Аqh絇]z3܅x:s-`<.\(>FKa88@:|O{.ߓpė>.,(8u%3{@1z1 @HPCr]zC̅d.ޘ1ܒ h35\G&zmO< f$ ZDv[8:f6[D-1yƥ 0HW(r^t]ꐂ5H.8# ~?+.F_]f@٢c;ٍs VB >SXLR;^JrhBˢϛRŲtR ']9.nfZ`rY0 V7/o_-KwꐯuG"> dnP)cgVpqXIURܙzKh̗, LQa/@Ӳz,A#Z461Jɜ_U{{+ 9UZ aYڬrPt /9}o& k1{*lp1:$ p581ӽs?m) LUtQ!rXcP%"%3]^g- 4}"B|xBA!kDkޮGoN}PݚNP(qDH=Lo$W l.#uE0RYmUI0A"iL{2HSTLx-crhogsn7Z! FW,\KYiqO UT[PisnP!h^LCM1F懙[UOǷE헯= )< izj!#hk_oI8>ƐQNPw Alˌi.lC-uiq}aaۓꨶ̌;D"tX{u1Vkow?/.h,lCS-a{-.^nzɢ!^EO7Η[l>&4%ca??1Vϼ={mm."[`e. ||=cUǎD˸oKhO)Rk_DQX>(TVS# K0ڊ(WJD$GW\" rרyJCro*{ Y9Oz_:@Wdz[뷕N<2?2گ۪(CWEr A_>~^{-UC+A*; O'NFd:vew&K8aqBw Iz`bJ,1?9xs͇R$#{|:+zN쐐jv62"3g ?#3єޢGN:Dk3l$+<#PIf4?ьӽs?=D3( Pt@zF|<~iH?@41#sex D7>2]'5ۢζxB Q*1d.׺Gf#q\ۘJkҿIؘTb#  I-%) 3vʦʻ3n@I=DAǜ!J4G. 
[U~0q֜H淌d.nF[Dȴ2dMV 9x{_g!(][>ր;6.kst:!oz!s܎g+cs!=[|_o[]$VZP:\0wDOb6&|M?.Y=8rWӺ-OzuL_\_/VK9Iɋ/VklL|P6CWW!ܠ )T?~]?fl9Ou3 lDU{yrcn͇e^cy軹_polK.7fz*d$Q>vɍY0hnL3RP&)jNId4+0f~iydf5#ˑD\\^!1IM _6_sYךFÑQ%C漰/leHʿɮO#zC.֪4ѪGr\r lC"j^ [~)F9C'-P0@'"2j(6*w509&~64vԋedX!ŢՉ> 3dP^{S1)gkN|NZwE쏟xqs 䧶0|0emxP'9Pr߾:1ӗ4Jc|oY/~zU~ 5|2CUY"^pIiA"y&bach۪kPxQբLi452vq`#TϣqSbr@\vOOJ#E,NKu?kMtW6FY-SAqӟu@g_~4QU5b)B7\G>4.,|-"]kFYT2L*7O ȿn$ĸ_:4PfU߄BbK4Z"U3.gW?8.lK_󚔓@I $rr;+- ^6(pQ8O f&S tΪi%Wv_~-//GoBE_(wՌA?p^~^1K+EYTЋÕ4$L[}/ N)hJ3D.ߏױ.,>3WPa;?{WGn/Cckq/^!@/I u,{5 3#53/3HX쮭n6U=7}d"Adj&=l2I0 'kuɥhe9ܣC2-M=@ ɀH_b.۶ɩ,5 nкܰ`iZڳ)?n}ҴS״|]ktwj/.U [$opz|Y3Iz@)Ç^{PkM٦,76ʈ@Z O,- }j/|&'&%/# H0Bi_|Te4O'G9J}rY_[h5QF D@ v3a~Oe+O7ןo_{=?o߿9yҋKR9n+O+ VI i Ή,UV,-FE&lJY '2!]\ 7~=G n3Ke2rB549$Pf.BBiiE,r]ž5hh;TI!?}7 ziA|^7Okjm=Qx4׿5-S 툥G&i⢤вzr I+UFWdڜa.!֍wݫPE1;GnVS(҂MWxQ۹G/}>C^%|〥JEw/]uaSWF2z: w3puV$V=]ZdFbqWQO?݈ekx>@_vwkI[S̭b\ d/xzrwc»]L\/]-*n̜y]?{4;㗥i+qOOf;$6%lˣ3SShmfkk%/Dnӫ˫˿muOWZΘfWzr[1A\Gpt]>g l?]+p~Fr>㮮*|>YdX8->' ")n>sE*w V.5.+>h geJ)qVY{Jdz3-O!":5^娤ɘfHt10MOmޛC9X(KI48rc@c4γ$` ,%>ʡ"&4gP\bX^qZ[K"J糒B2vv}Juefzߍ'fS3e?k5)9fފ?>pa[r2 M/MGRקeaY Q}Oc)iJZ~g;JhC wo||;$/n hip\EG=|f9;#IpOO>}syIMqm4d Uԅ+UH{L)*zw9$raIQ•rE^<"mfga;%bo(3ĭ9&׆-f8dyCD&ۄdWw[, ɔZ8S)q3jvlDCTLd'k$2]"#r E60ψd `Lc,"-ݥ2KHW_իsjWeUf;s]D(uTN'FSeP)feUyDّJ^S9_ Ñ+hpiQn,)Ge|.7*ZNh!Ut Ńc|)JjbŃYGHGrm)^uv[3ǟ ]vL7&f{SA5T1- /uC[Huj}%Ogo?.9LΞko\'eMփ݀@I<1 d(bGؖG'Wh[0b" kv= "coo'z£w쪏zOOn u[Ϩ~9h cEKcQؓ<,+Ώת{]eJ~chqPb2Z(o"ғs8M au0Fu,*^9>2M%D6hËLf B*(I,&NcLj4) D5JA=B*. Pxtx#܆]O'3ASX/i,C6 1^K"5&& \&rK D :$&^UvC4`VQ-n h8ӜY-R,ʑU}D 9EztGM:̤XEF,.VGf{fh5lwQџ(ɺya+%8CnKCmQ_F,^DxXE`7Zľ'& $c" weߢ5 ˠ56kh ޳q$WYi[ /w 5cYrF&)iHÞ6#tuʈf`s x겓3i\)njӕ[a첦]3>cwZ1_? `ᆶO 0 ؊U"gy- P=h6jr޽b?:#B 8?6\uL=i Ñf|4y~ EYY ۏBSݺo4]> MT:Dt6tI{I2vHoN!&ShS<Nz:tSM v"]h7~r4|3-_}{Ve2&`stTn2G$0hL''''7E9QG#%1a@C0[M ,ǭMָL:NrG.T4tXHث[.L6IIt5Ľ/6L%wa[oC(Jc 8, pjC[a6`X :JfB_a*I+HaF54"e.#"1rL{=BP,c^^B0K"d"ڻ4c^;어MQw5Jw t9c*عW:T%iE7+Pu;4fn o,+Cn!9_]{T!?f΢ɚ50zSVѣ~,b-@;ƀ7}Uru`YҼ7RxǓFYOz8͸g9O8߈stau_"Bp.ӳLCA6޹הB A {NfMjq 'Ϩ5ԄPkh'1k!F() "]d5ZaOQ*:Jᔜ͔2S )(+SrH}  `xmH3WV3TG~ִ J2@FEHSe*Ծ₈J[*T@jʨ­MET:Fn>2y 5k|Rm-' ţlQ5%Zk8Bl` G䶖`j |>% )`eTbhG)~{pOYWK@qEH]uЊJև?AZ[J sy[b8%QšשWϫ/7hƉu|Xg%&2Udg=]AMGU{Vs+BʘQRTyq{Fs,][? (=ᷫ|'Eg3P3@`VV9UmGǐo|p*fx8+ɮv9I%A6G[Vd[8 ]$Qm9Y~@-K[Ѫ1n*9lPḧ́P(! ݊to#,+'/(c)1xYʫo#T )6*_ح~Q Q jO1up*H؅i*zݳm6t)x0u!Y9gwiSCΈ" Li<>%heAq@n{u(_҈/-kt6:TTB{NT25NY ZR]6njیU7wJ}GE(zeւoEl`yS^| -6S=.xFh?=~+ 1' T_uoCt@)Gu_>*h}ej")}*G3 'jO[B`'H~}\]#W,Tv""R? 
v}\k,+m۠_βm& HP]CQy=lPS\q^/+Lh1:%;٣qy͉6Fph"\!Ĵı]p n c, >3nLw7m%,kX3xT@x.LfVX( btB!j\[ |3|~#}oB~\@cZg{eݟ@Ew*hOa$K˿?G#~M`^N||ۻXxݭM??վS8վS۸Y6zR,u To4m"&A෭*2MA(FGLeww77j?tw_s GU 8] k.bcѯ$/;SS>I&ak>̌wi!XN [,4| (Pc/e9[t[rsܖRmiFC@v5n˒r-oNQi.zoJDM]$u ?5w"̢XCDgjؿ2NA>874?G|޳B$8fA()&#J1e㵇l_P/]|w`?]q2NwUͻ*MІYϕSGe4 ` EI(o&WPvMXJ(fQl(]?J/ Sq8NǩqQ*k):~Ha~)PVKUPDFB=U҅t1fT58NUT58NUf`?PH"t_u\Rљa85(beGBEQ i92?ESq:ESqNԁǠcC=75Sj"s* )hsgGBEQx> ;>?_ts Q&9x+u6E\\kX9x"&sNat9P:Imd:C\" Tj bX, jOMmy &qEܰA\wQ8tښ\Ҵ,,4ï)3)4?=fiF_f<Ň4t WL@ApdpB9aj6::/2Q͡ME-h:eäF,`# -a,F :ڢBUQJ ƨLD'C .tBD1 ';a#D&j9%Yt"'I#9J:t]&ĥ$DoY R"SuGS0w^"㯀q nX(a,85J5k/Mp(4XtwDayFC5Dhv @m aMDQ^"#㳄i%^Xd$jpJQ, S뗒G"FZ::@u舑%h3TVJ|ݰR֡"+L-a\*ʔ KJYiRQDG޵bq/M1JF7ٲ|Ӓ-3:q>R0nRѭ v3 =~&S zOHo4py,%U1+S9HFJHйSLIkF_jFD(Sk0 ȎPC"#okt&°Վte_n]h7[E'vz5vcw\kƧ76 V`jݠ4F0d:vΎo#\fM7'@l;T\ժ2T`'A,ju:|/zYo&omM$9tg5:DyǜZD鰈8to?Mdbڜ.DB`*"UQbׂj(Ɗ"Az@!+ܛPM $0u~iq׻ǫAR;ʄ"/[ 4+.QQfuxQ^)G< r&F31~ӊL[y۹']>x{yn?Bxߌab/GwuFw 61J oqK̼&Hu>g7~)ٌF5 plf'e߯>lӇ zD ~C~Zs@y(_=/И>@u!YtKz?|zHx3[/s=X3@زmO[{]wWkF\RZ-cfξ0>oR}%(g /kh՗ * z9نjO(yGy\C<θ;cK(KeG!jkb0%"WDT5Yo >O._9dN8Due%׫C4fTkWv{kW.?<|+@)[5)M0gCG #]ǟ~eȁAك+J_9_vo%f(A(a1wjup[ͬj*`8&a^9 >lJ#N$k-9'aNBm=];{zpy\ UHQe.2N==xce Ec][b4㼹xwcQ޻~6-䳧W!G#.s:?{6ᗝ^Ux=$F, :ɼ,U.vU.CVdSKY )/F*h=.A>%Aˤ8I(vu^N1/sWK)W\K*`<# IS? u-Ӂcj3v^jL̀_|y2|^/5Pi5w*^>ћգ~k\6Y~5\a^&9|}|cɶt:>Nf~lq=&F>r囼T*N#!0ͮH+ ɘhUQ+WB ~+EÀGw;t˻fW8_0Xtn=%0B͘}.C8I0"~$BE4R3eI$<ԣ# !Z3 Bki!۽gr\`==bW9IZv^NN+2ieN/@崓%Nņ|=jty`h-p9WW&{lMa*]zCT|ɗ3 ]^d\0Y29 qȎkkB.tP9]̤ۄ\Jd,z({'u1]^Q&<Ӑ+h.Wv痩9FI3ld-Ywϑ^wnخ}|.nx#ܥ=ﶽ7ߴӠ٩ofZFp.(hg [Ѩ\)$X/VUp jsTNw ۟W/&;\; 4f`t_7 W$|vIC6N `eJ)4RP^6{244{+FA݄JjQ˘ՍO|1Z`!dnY ;Fw58r#F;֚'Id]۸[0,g&W>5۫eha^j>Ati^rsi2G'%ճ7,X>o|A?sMY2rd3Ak4)8Z/͜BbVaa.E;\NQO0 r دzhPSpDFSL,05 132^ָ-Xr%=|jԢIwlm/?USk6:fp޲>YZpذ=Ci׷LBK0bVD%᠆p+ RRӌ.˽KtJR/(L/s V"[Z~|;a|=l5I hb⬘ǻ%&a\[d@\_Oc_#bw(,7BJkl ǯ$x{͍ݳ%@ тDpۛ7eҫ$aJ/,y(C#iƠgܿ/c8|G<{?iZ/h7cGv̪㙿z}6}v~R?ٶ*[:ߤda/f>_"RdOx^Ol-f!5i rĺ[ L={|GC}K5;|&loTŬhZXĶVcʡ+j.%XMZ}յƬGSuUKrLG&Ef<QɚDB9taYjst: eрZJE<~Pu7F-JECL"Q%Ks-Z,6ؼpWo2![߭+*g {h7Αŷ֮slZQ_GSUΩEޭ֭m:-*+tXh{k쫺Dds!ܧo~+w[c,E0[JrH;W,++VkqP (d$d\9@dcS[Z"7kfh27dT+2⤰$~o!5e(9*[*6Mҷ멇\O=I p;lR_*K)iv_ f%M*xFzOU=29ZyFqȠ VàjK}5l W5]YεCð%<;ذɰ|!^Z'x[Pl"`bXk g6[tT:n;{l(ʲ!ퟐ2CJ2h鄔ɯ*S o3'd}_L|Lmmd̡& . ¹~|EHᘫPEA}G>Qf be`_:=v!Kǂ/(4NSD IB"X!G!zpSFO1D!JT(?"ETDDjS&LPQ‰dz<g zz) j^ mV&.~u=(Y/gȊ$*d U=R/HDpBϵ-k#ĨT$0}G(8A8 b8-SkX|~~M w#UDO̸|X5SK6t,ZG֜-;św;\T*(;|}|`4^Nq2Sg1A(ː0qG1g*(( Q$X$(c4 U5S4ʁ=qn;k 'JI*r,iA$S )9#T>[2%oyqNR<(݉]=b*,8$@/{Nbܲ KIHg9FDY:f*&92nVkoox~QA&]kDQ"Jo'v;-ѧ̈́g[oh9A ?ϰWе؈Gѕ|,W ARdgN .g %nPY0x,6;^^L0N@>]OI:OHRI裨BZqFc`bf!mFȟ@Y[%DESB)Z b?aU`eOTqբ' +qi3:Ome3HrYpР|zwUwxXlge7* 3`Ypp.悽w ~V~Dgy.dx~UG_bL~-]Cđ >Ա7͜Xl;m\aZwZ;upl )k=/ j;kX$N/BHa C律duR\\& 4w4`z5[bn1LÑR!TMOk/Pb}={7D$-Lfthm#f͹~.:=YZP }abRG'#.=Fc @21?>9i$gꕊ1 D%(RDAD,LPy%y!XsBxn7vQud[bL98yRWZhDA$x!VBK,Áb!G1PT,QGiK`/YSLXӬȳm[ 2#g7dw/-Kë0nxʠXwKØxa!q{*A8Bp4p@oru`,n4 :RXm/aUр/xF?A720qqInm_ڬ16ۯ"x$)iNLQ!^q\_|n.sRttej1rOVjp#mR'*OMdbo'RqɦT=w47/ZmBb;;͹%uS|Sr`ic2|O @ҢX{=ymU_sc\-LjL@27Xb WX6;OC0\Muf%>mxǩ*gۂh` Z!~wڵ!L@p+ $ /?ͭf^sљ=a6[֬RgSgd=!wR<1*wJ5N}FM*|ρU5g:O+qyIDqG]VbӱO;THCעe:~UQJ xi7Fʳ+ a/p (#n)|se껜#Nz|yÍwjQ{i0L>PR˾<2Cx4Rq')R Dd4䭉.0RB±M2ߋ _B$M ~ѥu,:!E4^ty ?߾_TOD|{ or :MtSˬQ8TQ"q NRKYdQCL [(@XCq˃h:0㊚Ƴ}rrZS$xYZ S*@k[XƠ S ֯gG8y!ZIo圲D! >&!@:(&!$ z>hpWpjha JCHT=i܎zɼs2Kj p2F(#. 
g@md*U3B=lGªj0.hO|G^\2px /$$ۢU7Iؔ814wHa }^x7!.KQF|Ai{mLYF2I`SΔ^Z.rEҊQ$Z*ȭ`8*Uʫ10V@6(F#6nhʸR Zk/˨T#L.X@hSG J O*D&M!os6jeP[8< kmR>sBY{]'a\~Ub;OaUXLڪ8ƚ@a]WzScY3']<9v lL|>z̰MUtwRhM:5%ANEa +=îe=^Pjg?E;~3}޳܁E|+woqh5Gsw<};ұ޻3kP՛ ^⓻sTpƞ0@0_ !Βwr {Q(Wey!1x7b8x^0@;Z5E۬[)f L{nmوLE[$sCJxح Y0Bٕ/eH.@ֵRnQiQӕ^?TEY~SqOvG\^K3g+0-\LcB8HH0߃aUB@|?PY)=q%q (Gmm wϏ%xT^ޅu^]~8NoQˇ^,߆Ŵ|go1.jAhŖV͙fFcfݳ{7e͐y!S8B2;F4bjC11<8{|zKn2Ǩ1v-?f'VOJPEOZyl\[y!'[*ne]:knYxtt|!08b]6&eC2G Y bg̩$A'x!O>"D)}"UBzA3 QW [$6e4"%*IC:jIL xpK#u-EF] ^cDzpBYD"Y ARoAo0 #c.`)`ZTQ+jUW/XALѡٯ-$H @I8R@E"4ނ}j(5<*҉6+2pC:p4AN "hGiwYI4>RP0Bĕ'*5uF g&Ȃ$.R&|0@FeCo]2p$'#$/@[j҅!؀(PW) SЌHɕ$+PDHw.׳pZ/4>Ќ}cnsRP m) H0*NMOYLL;eGMU!D判h)YИ˹B%ooC{Tfdy&B0ng>Uwp<߿eok_w}F.a xll{ O_(Os ?(z\_?tό p;'Jd<~w}J)9<ꠏ8u,zw_m H s#8i-ڂA7e4iEu3C9WN}C,批~ؚKOhs.c}^{a0.zu.s?;<ϼ_|E Ӌ2Y vڨ "xD\C AD\Tx~:xD%TA%J]T%t7%J#Ȕ%]zu%7!+ˡfMy3G ȤK529f5b ˣ]1TΣ==v5p<m(1HR50n\;\`"9F>53: KE ;2^֊Z$I>(uL;p$*;#ieZHQV yH;;-cn?lRi{8wMoiwa>vav!v@Jp2@Z)i95⻵ Bȓ`46RSQ2qc*m D!S Ztm ߻C'9et[pZ# BJx=T ބjN^тtdX}5*h$JaWQ僣`1:%[n  8!ҋ@jy$Ē-1-T3TaS(Nk~` ^oQXLG*e&o9n%d C فA); O:z/Q ( v`E{bu,}#!&iBH =agL`K݌G.~^LN.ܝfIl65VZv-vTσ/ӋO ň&> ]i4Nml C-YW#}7чcڏF6ۻrj4A)"]6߀uc9nk16TvQ%"I^BLQF\J@(u½RfJa{QfmBiQPJ)"M)4㶰 .3m[mDJiEĆ4jdۘ#ζAFP5Ib ?SI4m`5j ̔:t |"t;ה(Fpe,,x)wRYze my4r;1!m VS^HBS31v9? 3}fg^ ^FnuKWqSZ|4w<@@7V{kvH"ieyXrt/&iϮ24Sk&T&n)a(D}n݉8%ە38`xyZHteZLI8=naKOTBj?-=Q [ʻ4t&C3tkڡVyۧ3ҋN]_m[ 2ƾx&vnxbNpk %%u< ZAqU8"<8m<05h4䌖׸#S]F =ڲd ( p|=r8,;6G\VBI$֨CC~`{9)isSd-c+gj4 tZe*g*bN2>u.)]9(9sF%FmK Ga-~[N)a1c˫f J˛V4m3`hyatj;b;M~͊,՘E#7-4i.h`1Nʫ S?eLM~ԬxNLB 0u6@#Mb1T+m.#ԓMmvݱM.&fŧ5?^%0guGUd%ͳ zVB\FW`qA' ?|63jkhYy&m?,xF88(JCgJ R`bL(V: Pl];%{MiTT뮫rAo;V(+aXdEra YW]SX)㒯NBB zk"M$G#}}dSdo=f˴2פ<-۫+%eJpe/2_NJF\M<|s>s4|4T8>$ȷL2FNկ~ON'B!'E,LN O7>=2үThމk5gHyf-yKeY;ԡ2ڼ~wbR_)0uhTg-A)Q1\5RaKKZs|Q.*Pjv7*{E"ϖ<Ĥw(H#IʚBD( k"+v"uBi7a;cBNfDuү 5 cR#NT 'ر0Jk9J B/B2A6 ' MS*II܏+ys6C!єB]HMk"NWZMJQ5qJAY4aŀr frw%_Z Ym\KFYBXroqkPYZPT&Ŵe%@U`0^ӒE%j́[xp &lZddkַ"9mCqR6Zeu*kTGIx2#67g"FFH))U c'3C4uH7Yۆ}R[Vh.bngּ_mU0D\}6 > u!~+)=5pTP"F}g;H*Tp;KfԋVK ;ӎuɻ@6m5uAeAV/% LDAcea:zXDXi9˥lk;TSOd, "r؛7례؈7#@V诒=[RZȖlAkW8?lFtji_ۉŸTd6PQbyN~^AXgzU-k#M_*|Bբ^Xz2.͂=8DJ?xi늦9L@nRY~kD&aEwQ}a$UP>Ik TݖmϬw/J5Hv{9h|H uj`WS&J^(;XHޗIXHv-XBBKp;,!M4(ul "`kAvs3+nסu7ɍ^l7pX8p|HH[)U sp\؃'.Fȕ ٶ/acHEmr0G-pt Gq:2A|.1mqF\M<|s߾~: ǁJ_L G:oY~YBO(r9 ]B4VsxGvYwIFYkG_:Wf&O8(Gъ9+o{SMPDL͆Y.M7yºcc5XQʪLh'Iu+NpQgG6 vO=z"{GZqx4VFWT!?'3)OfS"bxe)< Hd"Q"Yj#@2&V{f Ē0hBp4>Pfr"p>K,".Ɩsl1 lm/pj†@|8cPVx\ #Ⴚw*㹞 hΆ$8GQgy5T>f6-7z b1JcNj6)oN>n2_;_jz C?#Ojď"Џtօ-}+YNZiP\(y/c1^{a@lU8\z(.?u ;3~v~ysy{ɏ/]yw?oϱJ<óq]m GކC-._t찫;xIVzE] uVv 0+y<Ec8.wlڳSU4Uf'EN{xy61 g'x,2ٮX/0k}RJmWlj v)&3&J2%J4~b2a.PCprqy{^BE^ߌ!#n G>`毈G_E|i1sؑ'M& sǘi8X/_ŗqh۞jԷ@ATXRbWn0 DpJՄv dA> |@ì?Z h }6cQQ KbMzd=8~w%_Fvdb =ZROTHAe?|'sM=vDS$iS"k ځӦlSmm4mAmM8qav6{uϕmyЦmNf|yAAu {o#g{ 8 g߿83WOߟc/t+L޼K/R~~a p|x4E({qm[MwJܳpzJFSJ] 8|5/̮;C +HM$=dI\H%:DP<(ksC Y*3^Jk,wjpgBD,gp"Wֵ?b_aèd9wZR3$0 Њle&*-\e >ȒvæPBo߶v70hn}[vi fێg!¶l lwOLD bΉnJP?`ruq-Ӭ9ñ~\Ȃ}UN71 p2RYqEm0hkQRT8;+Vqr'gQxw9h$T3@2[MRZB4A5IAP[FCcurPǗ`pPpu)ZM"ZXF8^Nr. TBzYQ0"a87uAAɰ@ M,mYnSgl_ha,~Ԅ+Ųl+/3̀[>ca4ͧ8_.R R9=i/սp?=>& a˯ٹ_IJwϨI"3LA>K5i7wxOhнԊhL^|Ey*8Yӷ{7 @P ҤYڎ5ê$ IXL˴ѳ`J/~8-ѥsG:.gU7Ebp◲1tEI+ KeBF3jsmFJݿیw2 w./B;:1Pq'i߲}\s)0F\9FWib+,SB6IIu蠔ԤCWPQ:W蔍q>8-7ܞDu?|OO@F ɾM7L7owZv1Usg0MM bjEoH!238*&5o&)QϘ>;dS a)~Vl{OBxY}$)1IۄCSN ?Z2851Kp^Y ^g1T'Jl9FG? 
56u}r׵DVD)rmP90Mېxq]N B,CQ`AP**uA`Xe"ڥmHsȕ, -,wNRA tZ  -^TA %̅He|W?|W|!!f-1,(2x)h2I- !dՄO9|uO'*~pZ&QŸJHkyEP/pr]$ Js0j SB+uu綵BO |V3IVNxؘ+m5K%O{;2u6+kM <(%6Vnh[]6k&i/6Z,qƅ?iX%N.L#n喙L9!4>6LveQeX Uէ2`*R)U+rS$J1 ?wWL(|# F2ΗR TlUqMjh44?~m*ߥdr l h/P&y~ߣNC^'mpSʼn|1NubNF핟hKG{M0 V̎&&f */T[; 58 ]=Hj_Y`XmpjcpҡEWQHj ZNsH( kTE.ĸmHu 5x#E{+юI PxPQSXPǜ&԰v*/Yׅ<ÌxDFb֧ҥ53` PG/Yͥ`V =Nmxlͫ=($&Zӊڹ+=9@U<.5P 1dH)3 WTSU-a(gќUV;f)C*'VZ@.t!05'Jr׉ssS*ďыHы(/G˅5*X08j8yRd9]Oh rSNwBvaUwVa4W@ g߯flR~~p˖V씚#3F[ZW;|مagGkiLrd7!p8C0qǹnd?,k|?,^$\S4mjS,'㩿|~v&ٵ~)V?w]%o>-"gKBQ©# Q!Z!nI!H*t:HMb`*k֥a6ѓͨgckR؄ƌvlB+DZ')PB݁-CdZ>]ظhR| a  {wﮯ|=QHe Yi,M(dF@5@Ǐ+PLO"/iRhG3ևv46$YM"h3 Xz,\J}?`,^*qQ"=4GÈ萑 %>ZNt9hnʪ'/LhpA_̼Ӎ=MA_Ҍo-A4=gi4g<4HhciiltE_O^:iɿbdOō_1Iԛ.hӘ &{.ҜE# -6oֳ5.DOP)KCWe_,Fo1\%}䭓m?|y"@C_|t3r׷7xs7W<0g)qUz9x'E5ƩI3fqgUmH> 44dʐ嚋WUpyôPđ9ySv!i 7FP}㜒6z V(h x5hRg+HDZq8Q+%Z)PŐ'M/@+q-HO5CE(on"ucaɳO,8ϖCrofxqoC3M$|LjNhty~P/Ka`托krfE;BkH z/O>(R7 Yމ#(Uyv]D5ь_RQN%d nJ`Bu)-Elhf`oPh \vJEWU+e)#+M[ cEUG pxqdԛȢg]1rCGPv-ӪUG%7;Ӳl!Cc.EuQBiCc#,:0- _ 9/,m$iQ =c*QEߺWHЪ%C"bcDDV`ZJ g}ΆzE]^#DRWA4RK%&oӾjSPJ@ϝp&Z._tU`B0BL\0HkШ:\ eϹ6}Lj*[XB6# SnynaIv=>AԑSy(e . $di(L˘A4qg%sK"s"Q[1q+1([BQm4'9]1:.QJ- QV5e0|%Hs4Yϗ8X^9xl`-ǃ,$,m(5x9 IF:&q !AKzQ)Ӊcq噆ïӸT+)aa KIq‹:`{>]{/W[}=y8/Z.?ưLZNxyFOE?`"oc0_nG~l&{Xxptx` |f^b]Q9Ҝ[sp mEIr%׿r+w7ZޘF8&G"FjOaia|{=d ZgAݥrb#np:T'XL^uA~uZ`IhL ,y𣰇BVbL?M=jeާ[4Z s:M_& fyKT!EHQQ_*AHJ"``<),5kK6EXяB߷"8֯\vt;yrB詳L],&f),t]l` >AL8'@zˎSDѡ:g#{L\$sZlI9E,68w$?8# >N"Sx*[sN @_! jSCiDG}v'򮸊/ǎ<Kٝ1 ǁ"),9CR =U,rफp9V<*pvKϑWX{ϩ%kz̵W!Yzsr܏Z$̶Xi{;.ZsAئhKqǸ` RB/Jʔ[]#p5:Ffd2eW5?"(!:-+ŘJ(RN9 TiCXE ]D_gN_#x)ݛ٧_paX.J^__'eU+Y,f?˲R7aĔŊ>q.L\xjFwSo1םƮ "jPb!pR"7bK^c}xՙQ㠍a'1125xDq`,DaR.rl?.5͙Ft%AQ )52HPH)ЄF%ZSCa܊aU&^TC`*ntL: bRVיxiP#L8,0%SI}Toɷ_G~ Ho 58z㷜rZ~CydyŽ$K`+1w3Ѳսy1/_*opS)5YkY_Z4U褐 ySs ԅ{%/i1-kzV2C(K%? ]G[,e% $uEU9HWb6I3i'b8{)FK$uJSJqǀ+/XR. z0t[wV.0W:zZr옣)刎:/Mu*M16S$ UWzV9’RҎ?XjH9#젣~wmJmuSJs{7f1| R˿KGDmTrlNY;T{s+۰7䇼 Q6d6:Qq%<1g ƴ8Ix,b( PzHCM`*j"* ȍXR={&DH ~]jEJ*ǻŵ)&_P=>\e=LﮦOn䟮`~7_}9=~eavKcruv,/RD2ao%oc-*|9w?Kط%VE՟w'nbsRoPJpTNAXbPQ͔Լܵwn4-22)E&:Œ3(B+ZZшa^ީrpPJڬ6'Ȃ!T9NDY Ua -h nh[Ki|w룧Bײ!ҟfow|@GdpEB3[l*´1S XȩKeZ(pݘuJ=Rg PԼ\e}&4ã~}a (}~f@h}PnHiB7.~ĥ F"[ɱgdwMyw;\|hյɩ[ۖŘN}3_l@hQBbamW[U* ۶Oʇ8Ωo_p%˴p2ړǻ$iGe!80+I &(  '*|oN?[9[#vU >~" m_npb@-LO'Wz:oYhU1r+J4#E`$yS7xSVxŕT_SSSEZ3*m>:>=`F|0 ^[ W99M%SVB|SFGv}S\Onbe٭szY_Q>7>t~ҋ[8#?d* ~J;᭖{ >NLX}]^mN\Pi-b&F%Ţ ,$~/* 3k!'C"]"XP8?"^(ABŘGIuvvyT}luVF|BY,u IyZiF6I U(Xz)1D(i>60dԌwo1j<ܹXƬVǩ7pqY;د0⋲wbP* @Z4ʜ[3)օ BZ1koxg:evϿ'U00弳?[3Rn'bQKfϣo'jk&9 =3"'i`A &j&Em2R8S7Oq"DC ?8PI5xGc %^] :[.s"x{4O \aJQP[(CdXڀyض2@^rK!Ļ˹"9P^5 ˻>*DxIT$OoH/cSETLD_"fPc.fHmMJ1r/wHHN]BBi%;nշDta sD0jYL%HϮ'CSDlIuOo\BA$Fm61vfBL *_t$z0q pLw/2 N{zɼcJ\Lѧ.RdžJVхξ\M盇-Seƀ>Qk^#4 IC]jK; o}@iPe1\_O}&jP2*!nOB2.`9Theľ x?CO+t%WkqyiKYp*ϣo,m!)-wyLxz1ʡ Tpml+e[n+܏v5aDXa2z ӔiZQYG5^$%Zzz>?%*,]':}{E#ZeaQn*/f9GjOOBɷc$П/[M%fc5 |JޟnƳ/{|f$)gbqqfpIfhlEEύ-ۛHmC I[W!'] 0D'An3+Tٺ}=KOg;I1AG4PP<-q8%eq ʤGc %\vյ[F)J 2A++3r@f $֓̚ 4TLTQjOUЯp8oR@*M]y\O,;WN .&A'4*VAe1Ai/ℕ<2B;)@ e0KRL>ic9񎊐\/x=$EqCw8Afypx2˜?=.Ԣ;{J!- JTk{֗47^61M srC4)!cʡkAM34ZjJ j 65Q0)jnk1B j'xDAS(lf5* 2ʭ <(D)|S!&ET J*-T-^6rܡHp- G-[AM0גFH)jO{ca58JPe6xG#u!lsr(SN](hъNI E:hGW䀉z0%5\Ĭqʈ:R q$$F%k&Ԃrmn^zHCed9YMټKVFz3rBH[^ -)U--+SñaPc|;H  < mZ?Q{=2$AplU%Ykzi|]=Ք e2=vtG/Wd g."ֈ@dLYHjoE.0ԉZaܹH1(ӠGe4W]%#=:˖Pi&b%Js4GBYˏ,^VdsrT|q8$Ao˚WEt(dMP7CbZ'F R;~o Ÿ_w^nELy>1S8? o u!wՎ峻Ҋ/zBYp꨽=G ?REoCubl#bg%+L:>О_ߎn۵PY? {>&C)v8R1?ZHʯYU]:+hx=mH,h[ ҁ!pfDdHCLVT2uTH."S` I$ 눓!.J {CfnĴ9/5zP; IB5̦2JVXSm&8L׬XW!Gp 5s&!4q. 
Lv8RkV- -Ծh!t,ץF7knӥΠ*PW֤z+NLXc"I+ɃVP["LޓKi1cBG7|.íR(bA#@MBTr!Cw~E_m/%es܌LBJF-g՚XMj,&r˨f#IR53..e4Hh[و+WfKkU̚IųUh#a8ZVhŭ'X@U;>4(dkTZ?W腟ѓLkPGv#._:MqĖ||){Q'La"W(CaCN{yѫ@~8pE@?>D%%",↳VhIQ+Qnm܃V ] mmVZvՆ3-Mm\BrC__v|pķwh.x1O,G0/ǣZ>xݧV|_ڋ;|r֣ȭ'vέ-Bέ>k~z\l&9>3G014Gp\72;g1i~9HuPZ6jsmPNA lxtla,d gx4% bS{*E=PW04z9  °AWe5<(8;8Έ#,;(&.PgKX!lqŎ'M8+g9>E80h%X 148M" J)(fj߶JN=z./G'[qSQpP"x Ee`)z#= 0A!&6"2K)KjvRڹ[T|k4As+L"nhQ{&,_rV)]4;^L||rq: 91 ~,QgYβD%~ˮ+eA0iKԅRZ*IVBt06&Y ឍ~8u:p{{=/gF]':崛H U 8 |<&m2;MB;mN 4Z*Sr2&*B=_ɱJ+uxJ3JSQ p("hL)8ĸ&3@1+9vVWMcn.nV;}0MTs]cl 0T*.~y2}/RӓyRINF }lD+bZpεEY݌oo Q. 2ߟ}w1q~5z5ws.^.^)L/)ar)J .SsI6DՄIp:1HUo4xBFUBY^PՄ8T<2BQM z;OTՋ1W W|'VK ոZJrIT^Y%CƤe:k|,Ĥ RFwseX! Ɇ1Aa1dg5 \޴^Kj( ƨף,z|e+ȹT `,ALC+?n +?.~U>T*yれ*?(pwVqbt;cPuZШz`vuvȴҕs2Ź(Nm5ځ>Mma {,D2& 8/MLV8x)|!RΪ!  >JI*֫V y g þÅ3M򇾘>EQ8GwP #c2:RbńpARxQ4\Q$X-jMG]!\.DPbg;(KՆ9Ӓ1jii+ȕUX|EɋYKʰ^A֒H"hײS=\K".·q otWˎ1=梇7y+?4?P7b 1 !&UB|* ҽ푖^}8$J vGۨݹbn"f+.*ww*S|*xzl&S6UCz/[zւC)ġ^vǷZЀ!Xչ.M q+A 5\=WpĠA]sPL^ r8, S@3ǗQ%HDؤ"3p٨!$JR-hMFkZ8ZA&4:-$Q"IADo jWFO2P-u== @ARLۄVjeB 6 zrn;ŐxtƊ*$N U͝饝ZJCJJhgndl]&sYx,̛°ON3i%QI`*Uh}*o:S2Q h:ܦpEI׆^m}y;O߽cJj湈RykmI/=~$8f ٴ'[^ILn1USLIIrIlQ_uWWU9%cRH2 9@l4MQ&4ۏP? U}!7?+JﭹJMA6h{i'#+4ډ(ͽ_8c!PߵĀa'>%L@e@&l)x T!K&aU{7#sS9xt6~HJ-G?)^0\A ,z "ZrU9hHk-D5Q%/*8#Ҿ?JSM۳:B,es.4EdNf%$sͺE6Ct :@C$vD'fiŒD3,Ép 4B&*ݬod&]9TjiLŷMZ냟vUꀭ޳mP⺭cyTfQ~ N[cW@=ξrН[8ώܷ,_=yO?0tӁ3Bzf:bzkIJ; {'Uޗ yՠT^0ZouV]6dC[&o?Yho.)('招>f7 `f\ ^K 8#p}=0@wzR\?sE*TQ>ss]]A mJU"Q:=<;Ep|Ⱦl㝸Zώr-}v_rJ6tٟVپ9vȣ۪AѺulg K:{o z8*;5$WE 57o +ҷwzֺor+@$1]kZϛ^u =W9Δӊ38jʻ2}IG{1ts1h2ݸ%TڜzOaerq8w{7W,wgƩٿZDhgd͕JAj ]q˖^ԦW_aG:Sj[tkm7&d W|8& b:~Wna>^mXgp Bx= *BAc19BwufQ 9z 9*mXE/0d >^dl D%1| vF*`%+;8Ƽ%2i}*j9T*zWn؋y7&3+{+#"/v; '> 4=YzlOҁӗ}m5Z]NP)wӱo48=8bO? YM&b6]\t׼73.tlo|ޭ'(I+u3=8npd ΄G<NTkޕ zW!%%W_[z1T Jf$s4I3wf85Y&qp`-}P ^L]ƒTŔB @ADfc'$.D:!'RG%I&DFI醇iJAq`IZNL.}c }1~?~]8h ab>~~<9> @ O&#¢cl׿|@5hxW~ ^e۷7TK)H4`Og?,$%*8"3zc<})|+r&5_q,biƘ"-P7Ƀ{_? Ơ8jtZ)5(G&EI5Og_cz94iWIRʌ)42H/ =!1)`9V+ǭ LEF3*If3\[AQ4P+MS{ 'MP^?zO )95Ib,pZ=K%OdR6.E;ilV0`B lwT ~NB_ׯx:=>yKX@Uf֌ )݊B̐[77.ٮC+%d+#n 4LVkqH]9Γyd`r^p&@l4:2 s!Qi3U(ReB)-j2Hnbz+AXO;9)!'(ՕN/e8CQ4M#&wR6ȗ 0ec`{pxzNFC4yBwe] :+z>w|P. +բK.Tr y%ZRm{Y-|>Qw~VCkcҠ;#04tAal@m\EbOOHHQ"Mm1q?yh@1T>v"[JNp Hu)TX2uvF)p2;RHmky5ѣ"! hP$+fJ򬨚HJZL^YApʈj `.p߱D)Ә8NA 04P L=$YbGM0S>2h-A9U$(A*Fv|9:. o٠@W ~P㲃V-f<|L}]Ox,=ѵǒAaMQ7lpCȨ7Ų1`7Hct@ŢX527#bYX:(rQXXbaLiVY{X%o㽋Go!g+b(Fd *Heϻ)xD*ylp H)Bm8;>̄ʰ2!,΄=֌& ս b>CNt!ca taXgU}FZth|3|=<;Ep|^{>SpHz!U83C;f4)$Gtٚ&WjYp߱bs&Rcm*&ɓ'8f,ZَG)2`ff$VS9c7~NgJj>tImsߝ̛-yV`zymzei;7+r Ηh5 [mH7JtkAi:Gv<}yJ cn[+;_T%Wmj(e8sPtV+p> J@[eZ$˧?}ۡRdŋZqM6hj-oZ MG&#=bD&Ј0oUBɷ"O8$~ "M_T`w6rvdq|)N˅]B/e荺"+լ(uY wٖhb tN{U3J*f5ܒ)ٙ}fC_(7YAgO[/ٓ . V"KZH?H ٷّ.~U],?V9YR L,: -VD9K]ʽDZUYwgӓnB"2$-7?:Y&g!gL_ f|$׫LjanT2 ŞQڞ&c@pNſfL$з*LiIpH-y]iBz5^|A  P֘F@3 װ/EOx[`3zt++xު8Bl lAz5 W v~Ie8SRt Gg^_-A|c Ja} Aծ݀zJ^3uh >9VZ-Ft#U=Pvl[yj 8gYj%b Qelxn @lt @^;]k#gA i/n0=щSZ a!<^h ժ! 
K昵NS#Q0GJvC"ZPG^)<ī%+󧭷3N{?{$o-8 nnizAPML2Ƃ+q`” œX+z*ZSvkPЍSa%)d+1tZC?ꓴ?N<9I) A@M̾LqJ3g $ <㨕DقİkGAVMwZrEbrk' <F$8'xs9--(S#{:!*@^IQ1'x\rɄ8\pI«9E//pXZ d{!`\|ϐNhXOD "a1٢Tqʳ'a!P|o/W%0.tU΋+ 5p#OhIz" ra~0<P?w]\`"ޠN %/>y}=/+}}6wgW,70|EjbRܖu_/@V/4xuRZ H <j ۧJ+ KU0Q%R:WbO٪+awхvx=,Ge )R;[VjbD4 h:_ߖKv[zט V)'#] KpDp}ȇRR>:1gqWc|!۝vSkϾ\]_GXK^8⽵?98ֺ#sZ›rsLli}{ulpwv{i8Յ7.*^_<+szE%&5$X=yC[Zcr#}xE`s p^qvUz}eVweNG$.^073^ *%3(j ܣ]&r1yjKyFWnF2SD>ڂv9mNBw_&zH BzC1uIc )$SDYEHVۨ}ի){nٞtq]bkJBF Nmjq}:[;Aac)yw?\VJq2B]vfx>Pvsmi=+J죟WTwd!'n96*u2Rn$ѻbb:hF\}VV}{MDօfTewD )(-Z`gSNKK᧾RcDL*!MCm^7xjSPs(UVްޖwݥftqYejF%ԈVc> Ue7( GW[sF ˌNxIE` Q.3(i3Fj*vzh4@F@KgrH(1 8šZEb1 JI, Q%&XHo s;SM`TƄGzIxM1poh!a- %c\*Ď#.Rd"ĽH PpcT?dHjLSpDWO27㣒ǤB{1>}sc~߇?_ML_>suɁ/|0_~l!+'=}~^`YDm+~ϱs/)¯_<,gq(Cpy>3[_εbT~v77(.٢„-^}Oar`r~%V18nߔiZpigǢ1`Va44X11ZE9J!)Ibj3;&{(~N|qa(bZrJbroQ!@*E -X RVr"4*KC:ʲQ*[9R WB(EtfK U9[V.h0[´gKAC)05X!ggwܔ  ӑOD+[K"Siobntp41eǧt`5Ϙ2}o?wkw4 tOOfp]G?A@dVvrjk2mQښLnkr}CLؒld>Gnda5>zYK72tHմܒ,\XMFqvY ,'j-D8~2JJg k!5p%ҕs귁 )^N{%sGdbB#}Fx)ҨSc]XȉhM)&?ny7w trĻ* ̻e4ջua!'nmqYFJ>e ?}H[ x3vzELhFݗ@ vWL8zzCD<2 9MN%8mت+R WT sm!kXdteu(-@(u= mC|9}OCI`mEfY]? Jn/sF~֤R2 P3,f.[" S &"V0"X%ܖ۽+ 'a*AR!cB)rFI >Ϛ` i&Y&"-ߠ`wBnePKnA6Z6Eya1 ;P[tT{cʮL:ab#-Z\ig[o k^QCŝT DH/ 8* aqD@ĆNkb(ͱOZ0=]Geׂ+!LowЂXx_*i3.Ƕxugb!^z #8 ;+ th \~w)'[*L˷ntl<&޼A< $k1i0v}ӧ]Lݮ%W?>sa%S!mE*UBuޫ&.WvҖ]SYK1b]6Ш Gs׵kstGǺMLqk 珡< &FKSVB~0Sv ÃГOޤy›=D!1o=1ZW`<ٴ^54+8Luj+dzD7>\RCT" Ra1&,1oFs=R k-I+J.u)tFP,`6AGp0{譥Ɯk+/1G ZG fY(!piXp1F5S*mX= KǥKC.} R|ef-;}n}ğng""Ql/I|{X)SK@y9!Mp%͊&VyƤ!W'\Cmjm)Lm %6/y @B(S›kp3{uܽ& [NS'U_N[2<,Aچ TՃyJH ⷚZUJ}UkB&)LI9VoO!9ElzsýdԛkB" #L9)8Wd$8vGJo1LDη`M "ch_~צ^!˜VجX!ӔL53( `$Yb}'b7*= uG4Ȧb[謖CTl}Etlqa1)Eޖ=d*I74 ɅX70ٲx^jj[?:K͘$0d&&o7$i̺̙$|F܋FeΞS2w`!'nY6UqNunTJ21H2g4nUcTցf 9@_y7W c%:'t&HIhwGM6H,HM)[&+f-cY2{᳾KFn$FVʹi7a~aʎY|-!"K۟~%z6xG[.UV~c,yUbHw(2EuV˯nyJn!~<rY:TSPVE[Em}PӰcء uI|~4R F`:(]\~#cO ǰC)c9e,Md# e"jۡ,:UH]-2\w7QDKŌ&4칐_lgL9W[I 'e,T>ޮ”LYZ?sGy=|k E-FAo/{wt  b v**u%Ǽ-DFх Yp, o<*U)\z6l w_0Ӱ.0pl>މ)M; 59 p@b2LΡ%,T F p N+=q.*r;Bvjvs9-0+e8 Ks.l+G Y3=sŃ0g>$|1z~22Ie&M(c7y[T!P6 #,E 9N|4W$LaR& Q)|n,A~!Cyٻ'b.KY]|( gl깘+=|‹/0BK5fC՗sKKY njXJ^>Œ9v)r-|0)gp F3qFfQ( )7O%@G#h[z+511"&SJu{6, js@"+f-.6MLo71P̜z;8e{n6 d |`J({\y`g<65wW}HI{>[x2]n6tՇOq~EY߹_ ~6M'RSzӷ{wWc%iϕo㦛*zuibE]¾u|4v8)п KKjVhW=Z` q]rQ|+4&}\glM]ˀq:[<x_i]+ҏEۣIUSf$nyL.O`0*]b 1)$)Ohïg΀/r|eCkFnwbȖ1ZMGGM0쏶5!kvuͩu% u^um ,{AO$݂!o\^_|ugʷ:ބB4&jZSUV@rkĜ_ &u*IU~ɗA&]cED1JȚEfDw^tĶOѳt8-AFτFSX̰,Z|yqR j`LPyإ)0cu¬ :M6pNbs>.rF6L04Ay$3vR%c6ݽ!'')gLgCt ΫNc{~K`\!!t F0$w߶3c$Z)ow?Dje_wyÈ<_&ׇo[h97*[w %[,t=w7Fzӻ{0`S"mԧTlJCʑe IJ61)rk$m\\n5|JTX&RFJSN񖦜rA)*dJ YerEQd9ѭAvr ;oAsESH!k,cVj,N\$ fRTl [0۬F>h!!w0ZgLi Nu-D]a/ ,;sMLA[i.i8WiJ!HՕ: P&H^y% qBPSw%o1ՔqQYVQ͞,2~n.`" 8]{@.'ŷl8Gf1>{?>^z{`y. !ո>>& ͒^HDޘރ_oXpdI 4V[t-+rBH=1 3NZ/9%Fs,8@ cRK=cK";AN Ԙlayn`3vX;Op=âV'~|:w @ OrsAP>_<.1KF!֗7eN5 U)V*&Uxe\Ilj1DX,S I-T$L]")3atZ5i<=~=@#HT&D"1#BLc"Lm&V)nb#b, i$׆oޠg`MyJY ACVVRA4-ͥ2DYw$ c Zƪʹ \UAb+VYĞ\#-aUe 7Bb.£/VT0ثNj ~>puM8FXY͎M:EQA2zwd8FLqiU2Z?ȰVSk] BZhrg&';T;l2tK׼xV'u!=m#;9 KN:$'HE6vM\8Gl$Fu ʘG$; OvBk!lC=MJ& (J0%2$?2ۣP;i?t)B sz+Q5ꅁ#$R`#B9lR'w_o_J֍O7&D29屭XN.=2=6~{PA#+b*hbYJx#c8$M62r[_6n}I bqIT* &ќ RrLkROq&pINnuU}q%,~ƛbYzAD5d[nalSv~>h1ke=MŨ! *F7Ai,Ĺ!-| @ I-{ qT4M.- e6fAxDyd2;lMF1Vє\bsi_Vd=ΌYcqCwY̗\_Q/Q/^A#I'S3$7}}77}}7yv+.cHsjDP?CA7 I64>} ݷЊ]5HB0oɐ}%K@>j} FJuy)?{3~seUpT1? ܪbwʹW'TD\Frӯk$x /BGoBs~~AZ7w :'ߙ'x%Ѭ48'wA-նz>vzQ9K6$&8wFt=`y7B& ] kN5? 
ŪaZ`ڥg 'gn\8#Ky~/}ϼy3&Ytߐ1x;.a}A7շ&U)B6 /d$1 qҀS ~S'TQFx7s;R|0v(?YL޵6r#"eqiY,)Yg,6؇E`;^[3d~e$e-51c[Y՝,.37k)Z%ރR+`һ:FʃFJ|"%&%27h}iAL1h|"\ 'S z]0Jz^U v"-S;rk/^B4f`DxGrmxK;qagn}:j@ˁ"5ם o9F>RBR}|<6u/^\]̿uI=azg'o>7gojܩ?Qvvke lTt}kuC?h«v~FvuV ]b4eկqW ꫵn`T_2Ҹ]&bPWKC/;HлߢMڙ 篍"7 L#YHJ9PMRRHs5,}*}hDc4XG;߿[[\\1@ `r:&l/Y/yGiߚt#I?g)G 6RcDc>;Z#C;̳Z1?tt=-TW7V)vн^Oy:5MK_Rdn+ެcUCIUG z>*=gmSh d4#c:z'X'tw=ښ9ܴulTί@"M,t)( z ߞney&:\nvq1"`0t21W/NOW.e۟'=k7jmzCj6-F(鏵ݻs͔l~[s.݌M:sKɧ~':(׿O>?_'kzMv\';Ɏ'I|9 rs.~W7JT sӛ>/3O/w.;\IڷVc+.}epןN 4?u eRl~ph m3Uݾ.t[TfhڇM6Zai0kl06]Etd=s_@:؇5nyBs3h-=2(ǁ3Vf{e~SDZv,z]ݞ+V~˚9XqkQ7fXAccƁ8p•Sw!|41"jic9w5*ĎOIn[tjw̍׬37j7uG*L N]ev*6(nK}xofbFQ84}O#h4vwT{y¹3*8&8ű16hi؃y/W? :MGBXIw j7!q{ZLn= vp EcI.̽؋F\rabA7]͚EOgDYs1!؊|!ƞB>$riMIg}dXdmǞa ٭ (Dd;8k:J^4H7hx欛5%6AQ  y! |yܭ1@U]^~tҚ9Q/خ~\T4jYEQ:JBR.0'\Ѯ=QWztH'y|}3:Hu'Vm= tϺNY3>ɾ6cPNIGGc7rkS~k4gmr'y"TKz:jMGn٦ͣG GsVӤp U׻ה;X3Q'vljqHede Ofٖ ifblG .`6Haf6@t@+3s]ɾW>uffжϕKC6eHc9r(UtXch ,)"9e"c9|<7ȱ0֬ 9e޶&9~x+Ԥ_ ܁ڙ87Jſi^ۑn{Bȳ{O*wYtrWCڪ`PxkGD )2C2+J%jSK_LqŅ"荨lE'Dh0G 8H ϩ8ڞjKʁގj&e5&ZL*E: %/mŒY% =vVЀU|zvU#VNdk,w$u {mno4lY|x2lo2A~sJ/蛖΢_=GlqՖW[zhKoGWM:$ h\%P03Z$)+dfzݟxd Iba_8RGAF9i-aٺy`mU) 2vMoz o۠ 3r5@8Sj8|4[V.)]ԴoCMphjhY)dY+;6PZ򫲙Mx;ZLb>gp:=A,Zn|lTڇ*-'WCpsǵ!qmq\r/6XɣIBo脽#d NdiKb'd &;}'ꕓ||, 8Z+hWy08dȰĨ9휂b] _ʍȚDՇ{W72w:7UO~bӞ'p(AlhDêF%k UMc \}Ff PpN{Xnc˜r8_9BsPm2V䤢'()e21$6Ӝ AA *gY,h{a Ds(QG`٨<߉;U>pus~? )`QzZ`;~saC[\KN9zO =B&uZY\FK/.Y+s9$s'4A 'F+Sݦ,x zn`ZW^> ]^` 8L5zV !h/ޔ=HrF1I [YK!J &E9aQ"aat9JYsmԈ.8\ :!++Ɏ#{ 0*BֲKЁMͳ%F@;Z@YV: : vѵTcIٽZqAz"kI!;ռk։@mbءy H^ {I;w٫%R?DDЍlBl{G+ݸk"u!c y(*f 2B}V9\tLNQ(-GFX y+x.g'Pg_ ';عYr{a}T!̄l-9@dBa ?ȰkSLBv"ڝ[ӳhv^o/9"\D=P̙rARvl*Fy!A2rMPVĺ ͨԃy7>s]{îA 9p,7nf5r"8I,0KT)ZIb1Mlk9QG7Dtkek.ִ/AsЭ,Y \}V5;,+p{Fe2rڮٵ:~RI,]$,^˰)@ڧ x--mYޛTž}d2׃| I",BRq& T(8(#kH-'ȼ6RͲx͖Rz=V56;yzZ駳9??o&f9ro/8qY0&a-͏Hx8;r&^|:krFR[[).N2$b0N(Vrm%R`\:TRσ+є ʅ]ǔ7,mxl LQRl TlKtuSͣ1r3G0J'G[!fѶF4g #͢-%1IJFS-U%D_#8HRj@5O_zΏI mAi_ pqsts"G7`vɽϥN=rU;%$s<幘te:s'f>\#uʉqʉ>Ƿ?\,x)7f9Ts'BĞh Y,vx܂H[Dtbz/ߎF@l|ax{y5Z,؆od,R#} y aZyI¤=&a,w] ӧWxXiaG=d.V\n1E.)}'#h ,f1y$<<: )t:&Ȗ3>!9 Fc?}`ʅf>IqrXaQx 'Z0uvEВNqBpw'c{vGum7*{jBDN-IM9$ `R|,I ).J?}s o ˭Bq߼e)ޭç}Uۋ] 2Џ33@8f Z#X8.d.VzǏa60¾RI`YtԏGO+&*5(g[H M*6C\sZ?؂ Fڻ ѿ`Z@قjra8D\^,%Bs*I[+~ X>W<[EO,2S7MwYJ ܼTetдBтQ" /J|}E@[VRvWt(_qz;'F;bJM'J * ,5ۖJR7yXf# ~퍟#g~KWj PB,re5BEI-jXa0~F߲ʈDV"}׸Y#jf?l?d RA,0N7Na~z0QG\2?ѺwqwrXwB $oB?RbEKS:- g1$7sR?dŠIPWMPcdw fpd D\Q;W1"?&KY @~L,jbD M'xbQb`~REiuclzjK˹lR^(DAlj|YjsXSh8`Vh : k8)..b(l]0]\ 8b'\ǀeNipIiҴxٲMlY }j8i" -{_&RD#J'R]P8вR|lgV zF%2R­ JsA)8Dup %R\fJ eFSN]FrGs+W&|7ՋN?pxn"T 3. 
[pBʠ )YP:]TȡMwɡNLiԿ:ʵw~)t-tj'$(DFPGn#k#(l4\#sWg.ϧc:)uݱZMYI0K;Gs1f@1kz&`rRˤ3VTS4jqsq}v'sDkZDP ۮ;BZsjCC- zc?5mot%t41em5?bRi}lm]w > V&B*~&@ča/BpWM8ƈt%9+c[Oy:Q,Y_'64x*-h&J"ŏerAPs` &DYʂ(dv|ǖƮ#Y);it'LSKI`RC@v TL)k!%V{\1 H1*vU(R /_DLV;>iŕ6v5\tma$c $/P"PpMik_yZF1~% 9̔>N h*%1B1H )FcB8rxW{Bׅj=lMM)R\8FpX@ HUPVE>l9)K61RGf'D><Ìx"sP&;)XUVn 9 K9wK<`nFϧ+-,0G(f?)"Xr\.bEei$c+$d{h Y(6V#G 8+;)F~U+njqn2 f`xa7k6s0U|&T |i@2Wah: IQjiw8\,Åx3 &`<<t\ }V7f;3y9/[HrﳌtN̗Oy驗OyBKFfid<*3]2_g%xH?C80t~5 ʙk4kA0yF Ζθ53H C;h\;\k5(S+TyX5۶r|<w™!ߔcXl{} @,L %:G1N(N5AƣcHg8^JyO|%!f>\S֡v=(ފ6F?f7h/v?&%X+-儀(rX-y&vBΪHiO&xJW*F Dci8TsssB8GzP&pw!5w.(E3OsV%̙|3r ~-fB,==rEQ _I1-3dnjH/JA8w%ιu\b a.CPz;ݜ]7t֖Xa18h0Dkmrკ}>el<;pI8Ē_Y '\+dkG=3, ˔+1CJDsZ`lsI:a# 1)J8Ն8؋#g]q` krp'ⲕuR#jХ4[RIehKYuv5<, ԑ'MP ?k6s8}mm74wZMVشڀd2zQ_HgZ_F2ŤۂǫX5#xPReM"|@:iУ;_5qZqw>q9bqvRh,e-:zƳĔXS߻ЃA}ퟲqzomA 06tv<~ISVd>b6~Șk٫{`> lYk0rQA|,SZ0G/jj#\XtJ+YV:%-N-x^VsJU.W=諣R)U~_ojCr)LeyC-%zք"uJIc A){NJD B4Q:!6UY?Ԕ!ogwPcV)8A ݣ=pWuflhOq{!G+I=g w}W <_>6n= /RN[[{ݶ4crzdcrjCByF{{3W\o@)%Ƥ(/3S7R _\L?߬H9TLScxOۅ*vJ8_VՇb3ԁP=3<]4L4k}74mmS6JK})Kg(,f0 (iWBVʩPixOTʔ}"zfQٰa+E@ 5Vc "+F7:1wAmua;J\z7و484>60jj@K8JmzC+'/Ί]xIt]FyɄYdyBB2&ƹ`+F ?S:"9Ϩk_ +z˩Vq={ Ce -D-9xR'6gʫwϔ`ٮ}e6 OaN~Ml:&B׏]}0(2/~0\ :OaSȷ)Z=nqbF@=F^rٞGys ;Y _Qu%"КV&DE! %As#~Oy '!hOxcU+T?9tu6N7vzVvEv௵^4֙U!˃;.)2Q\Iu t ׼b̑8J k8©=ԋJ!|}`7Gci< Jr`0h!&Xc= fz o?XkFesw-!L%g!P X$z:rc[!}b TWYC0bcc!.L\YTbՀNQ$B{D5Nh]F`T!WΉd)5W;r#fE-wu݂98T<MÚ**鑯hLH_\SB88.[UIWO6=5oc*lk[_uft5t}ӹvćL!%G:' E&|ky7[ `=|9+ 0 xjwI|s>^:y)Ђfۋ?mmg}S\t[o@YG鵡P`ƣӨӚ¼{ OBeOamc}e[0*1'lFGbhqm {$ہ=]|%fF O<h #߹[,9@V{p ;upOaViSHՈ:SUxi*HϋHϋzsSEIkm>͜^IW7OQ/ M.5+:.^K+*` =g"0+kܢHjRp΂\ r3P)u)sNU{ק²g1ڑ1>Y22tQB/]'>w^4_uwUSU 9@@!-)X҄Xٻ_>it&ĄrU*S[hҕcc?R={S.O>i^|_dwLٜ љV}xq!8ڎӶbtz_۱yo34b*r>; 9AP 'XJ;ey,X FIC3-Fc0u"0EXV,`%hһEZZ)yme#N_?QDQT"wS,,:[d*#XP¶KCFaz̧^0cF AYGJl¸%rx-!^ az$Pnݲ h<740ɜ8N1L q]@p]]7L (6ô` 8x˴D AhD hc"'Ou )>En #pWQ\??FFd׏ ^;gKeF0g,Z/V'ɵcMkd5O{irts(ϯC禧O7GWO/>r€#CT:`v0,}NC[_<DɧSLwlqL6lfKiSD Wfnn~y bДR3Ͷ!uJYrp1ض9gDˆŶ, ֣/$+ Q9R2M G,]vM gоx *Ԕ(U9H^ۓ`|8ݫq`YH4:.gwͿT g'L)ٛzf17o_&fܿDy oT_R> =(R%{;Tfw <- ; IZϹl K+[劘`R:.Ȱ W U(v`NPEzO5 9')m1 qz$J"$+D3C`q /° 8%3ZR:KMӜ1MU(,`3JАE DN adžbD+dԒJ5p=]HdqN !i4:)ٸg;ρxy!e*Ϣg g }wQ{3*>]jo%!{{wq0豳_Es2fSj?pwdJ/:VFE HԑSAYq6RO.Q~z㘾Z/S/8* U=Pw#.# J:Bt)+]Y ѣTouU("z-Ϥlzw7er;1hVPUTy_ꊃqGT톽nT8IPOi؊E%,P4>-)8%{]:eAmD|BY1;O2d2=z}(^>z;<*b^c@BmײٞI*ざN#@{TLz"B\yX<7FR=z.vAQ7ƐPWheѭ%2ȹ[ Gx(DQfS@֩;./rn / 5Ewҫ߫N1%~&:6(*`&/M^$tS SWT# 8|V+ 2x@aoB|D@(x% Iޢξ_.o6dgY\j"q[~h[o9[1_];3Ec_ c~^,%gj(~bR^| ?Tǐ3 ɌTJ%jZmw>5_/cJ0PdeK-;(qƞ_ǞF߿={r-d_3?t!ι*-b̐H+aA4'zlS%p7o`*꽠 'wTw1bKW!owz3F 9#[Qb*8g=VݾqY.rZY-;*9W7<[suŹEuŹU{}zs/->^r-`#c륛ez}lI+r_^;hx I諺uI[?ڬ?^&Lq1hjY2o"5L_ 0`!8R\]4WgU"B#K)9,uvY`J&PH0LEa}n$aGOwXΜPD TsvJ 6@lV7mP;٦14Tm6 :GJJ6!p=+a㭅酊By#U0d(Xx=aF hUm EAx *i)uZ9G1g!,6< rш0BUE9>f秛$H`Yt|I,uWO*.5Wo\K%(V^.@>| @ѽ^oTmOw|AL"Nqb(~x^n t[rE$ߞ|StlUq^({=_y=m ٜݼ!$5?>?<{_)]*S`ptfW>QkJI:|į-FWKw8F^zmB*[ԐYK(w fQŹÛ?f Yߤ~:M7Վ^·1(+<Ҍ8F"(qJϮA5}xn]9#xhŻ25oV|9 ׉s 'Ip_ۓ^|gWwj4²U_x!N%U A\y} pi(Q;?-OFѹH !U{B«su>ɮ9bpƶȩ6]~їk,[S?Z)PHxvxfU#Q)h瘫uk-ŔLRuamY7yI5ZPsuL_@!W7{դGQI KFWI8곲ekoӚۄQ鱷]*7xQe`QWZ]6#>@у:/D_Vٮ^_uU% 3ٽREtm  !y}e 8o~#Nya#H{ԫ`51P*vIN⠌Aa #6$a'߆PTٿXx\}9RRCT0J (vX(sbs'b$Ht`e Uޝ^(6\6]^.S=Kf \pB8x-C$ HV⠮, fX"TJc=u;)4!" 
MBr.#JsXZӠ3;Rjb9hFRD+*-Hn_뮛SؘJz}ڿM|,dT"o?Mh9=hCڔetpcQ^ҎrNJynX^nsF,8A-NaɺKY ]F#)N@)/50|OCZAbPQ"Xg;WB?j*gSfJg1Gcەh/xeS`'L'8}(N?V~cì_U Gf sBɤ9,Rv /D6'˰V˺?^Vŧ Q *> }mc%BEO%act*>]Gg mF[2Qy"4ZQȮq (Dt#6^R)aCJ)qmZ(dXIWsSpx_egҕNg5NEה:#PLt 4|!C÷=FJ\-=M>FbָCP*3@qRFX0)8u&)X X*qRޒ-îRϵHqm+hTS&DxodѠ\hb$%Rq鴧A/ʩ^ .g4!!qL` ɐ x (MVKTm)ot_۩(ԙrkvB*"SoM#!oAPkj WT=>&WT&&coRN a_6fqVIo+|Л]I|@lLE_/P.RAۋdD |oyLu'ޒYSKCtn*)O 1z Z<9'%GY)QB@UE7Dٳ8)8%UI'DgO{qTaQY>2,fGqG;+qJHcگ9tT(:R;Q%z =k)bEF6tݿT.+B!$3icKg-J"2{3F 9C^?FVŠ3rZ I/Ygh|i Y Pw}L޹mF[wr~e w}nJ(ݶgs*]w`nU0N {T@7q 핃x{Ho]/cVY/];h:*/^{97 ww~`:,ޜ~w Sp# =P 'xnRa(Vb*GQ8)Fʁ>GctCٽ (9\IBY'V:EB'Y=шƌ7ZOVJ8G Bi"RPT x8kbBi£2tC5-Q${=B#{u`sBi}$Ub(TH/>Z?*fiZ?~!-4,]u6`ATQkaj+Goy I}K-+Yt4wiϖނD f'>>߻H"6/EBUTWhAߤ8dӯb4_&LV'y%ޙI_4^wg7iHob έD8m$+,T"p70ͦbd(&)* EVX]U]s|Z ~r5=TmԖ_<}3uS.կOO*K&,t6ŝܗh:̷Ϗy΃upu,5P7a"nr;AU$X e0M_sEb<\.1oLNLX#-n,RyV @nu PeDs[H|_ `e ìcEiq7%T#^1 xSWh-5gaޙB48=r0rQj-J%S' JҔYRrB2 FMYQu(x1Z\XWtPA%jIWПXPJޱqgK`* -IE!t-8#D 7\lvuNvcD#[5;ݳ+WJԅO.-|Ջ\~eD1xeVaǺEtӪTh@c_V 9nԛRĠ6uhGո,Z$:g[PMn!A_߿.;6#DVZ@rUd|n0-y/VhďԊw%9RZC+_i ikZLx;^osTtB'mPI8hkն_>=UdQ}#' f->>{}wvB hVw_~plܰ??T"tv5x_g?,s bsLJ>AۂJШS=>PjBS*%ps2G>wocMG:tCXh'[EVdRo!,Ea8 7G8AڼAQ?WKh!d2yϛAQ)h[!{Riz-~_#BF,;>̊;usìGow7*e=+_ e^j) b5z9PO!} jC,z*Be+ Y G @[%,E :-I | 4W TAAM@CAn*w$jӱV;Z;[zTvBB^&T-tvlMS1gn OpizAvBB^/SJZ+S4AF OAݵt^xuFw¹+[Т+d:Mղu~_,j9wimye\nBk}jMR+zҟ_EzR_~BfW7Uk KV$7GLaVCbw7$8sZPߜfIͱ1IdEGƬ%NuZb@OL>=ZHUx Bib[A-JxĬS+@YW` {Rr2häu!3|C耵p 9r[vgNձ},1LR+C\4*"Z]w[ǠԠ3NLAM +Zp7tשI fU'A2ɇCz=w O~3FNـ"gCh3Z5(ܛ[OqApiySo.a@GKMmF 8cC=k"UˎI!gS^7CiLtw -%ae/|f_@]&o*Ԝ~Wo&#Fї#eQCm/w͵o"t_{ޫ4]b ͭXֱxavk1w7+wbt uI:mQcB[Z6dm" lNMYז"N i&i|E]9'Bo"EvQ"1EB`[%6%HR}i5ㅜ0S+qZe;2 ZŃ h!N 7c1ӱzp6՘e!z6wnñmAqqvCv,`x΃-Kہkۺ]uuNpzD"Ǖ !9o,@*WD >L}y/n}1o _y#+DD] ˭v0 [Ur%QD okJZjV"ͯe9G>w˳} 0TsuN>M1~6%rg ~ԭAzty].n3UFx4/-)EQ%SLBS !\\Te+=0T~1M"'z?YN XNb_ WE`5S54:6IIQ;iJ6,F缔iy=iU)R;[ 8SliJgW y_jCB DmN }n:]ckNvWgK@#劫YFwQ;Wu$-\2jjgR2Cma0 [Z3N[A-p`HulA-qw^MZ4x}=p|2>0n߸zR<,B/2U"_>=~VP8{#' f-> vᣟшϯ2CANƽ͗a:E?%YkU4]vҏ77x} 0w2}jp*KnFv 8!F-ᇱҡ}i}1;)n4^\IĢ PPgВLq86@鉰e\Zm_\A@hYpڔLN8C㌀EhT|Hj)3eGVIIx;E~%ύ/ h C> CJVj&cJ#A`J$HfF M zHPNAnIx>,rުO8mOHC7]h.gknE$|tsM]RRG~Bl|ӒؕV== @sҥV_炛EM Wdz~(O.3|ՋFS2|}=mݻ0bau1HxgD**-˒zIK\P|1 N[Jo2h>oYNѴ&}yT;Q0f':<sa$a3ɑΰ0r*JMbʜesF2T>EfPp̶<)Om>iל,Nf\, LD"4ޱ7Mt:ik<mD4/cC:X}G*"l1k1]ri^!"+<ϔaYl3VRY5Vx;+S1A1 QoƆ6;^ tE_Ϳf}[b'߸EVp (z?աIO+vQ%Aȱ;|.NPgpon.xE8q-(ODO d+ yOs"nlFq`a;bH p %}c3cA ֜9 xU6泧G/h̬|@<6g!̒vPU3`r,izd zo46s1ڰp/_t<ߠ΀V};-#9-Dz˨V:=?3c4F[S+h2#xgWHߟr*Xbvw=$]u܂P FwMGҌ:cU\9*|A RR]Yo#G+_vq>[67|̼lGIRb񒲘UdQ"vReA26ɢ[W!~>zq_ԧ=)8Thqy%apz(1}_g?ыW!\H?_EG;:,zn);X+<0#6^jk(Y\T+Qa 6 l]vGVdPX"a"1F`,rB0;AYj8o}8C%PNv,: ^y4RT験R[̜P )EcKO\ R`@IWhʄU>q#P?)BVjYAF9kI ',BnƏ.ؓg~axßP}_?ŧK"8XTbS>BU ޢwErڂn^~! 
P$b4j瘎 1"bAW { TNv gO0ׄv'G|P(dlwݶ)wZ}Bkʴ֮lZTV?"JxwM3~":n0z־^L}DQվDC$suH D0 Ji %_\h0/.\oy}4aՆ1^89J(++++yOc+_W$?S`D(l!!vLYӖFBX;!#yz\u_M/w;0D!%;X9Q[pF!$?]%4k3ԻX  QJ-6ŞbXkpApRj |560X-{ErJ &| )g2x7ZZ _ <T=%*8š'#!xm$-X%P\!4s$: hd 4 dzMȭ(b-4@ݞ<O[,^J(ưHx&jR 3zD$qL_yJK &4hȅVD0DP0TG"ɥH.EݥݍnCR#ah4ցjQE8%qEzh,A D5÷t,wahx2`|j6_FxY㙚:Ӡa- i[ ~KzI7/=Tg_:'G$&jg1y5(VΠm~ bKa{ p 0?Uyyq_-oO{{ KՌJDQ.OpCu}Ĕ0]iG)noy\w-cn[V/'?sa4- 1ܔ=s?C6׼:|wsE) 緷7n6|8euB)⑑⁔G(j2-BH {,5aK|*BaFZŰ42\!ECyBVqCTĐҍS,7^{lP# P9r}gd<0 mj8}6L1w00φ%$z'iQϬuQxBkܨ rxs; $NҔE(QqKfQHP*P]&U39Y (>&3NUcVEVzut+ǨX#N:r̍&zss`N&e1*JPS-&%XcU327bс/ @eP#սYOR]Du۵TD@uj=k(s^oͿo`~DBr| °l"9Gq;%KDp^\dw{1M5ܞjdq姭1@Zyt$QW{ǝ҈PmJߵ(9^[h-KKoŁgk}˾[$(fw׽n$ڛ߽E\p}}2| S[;Yȯ'Wb3^5kGoipM}8|jAk9 ?'%R=_{xRB N S}g7-F| q;&>\Kc2v4YTzS OEY9MlIB޸C&I-GVA蔎D/#"thdڭrvkCB޸T-vӘkJ11htv1kV=S!!o\DdeVڭ)vc dvkV~!S!!o\D)"oЕ!k:,Nj9 ӮK5晴OHJޛf߯-KX=+M cvϣ~'TְBT~UV0X_-V%o䣱1r}_Gx6CX>_&o.\Ly|? mt*t`yp{rs} 5hx_ }1wFlw>MVw &O-VvK@IۦW}2I:AM[5OVGHˀoOEqԁZB\5CKӢ3Yu;ЖF1f$mBtp1VV2C&Eʗאַ}O!:}.@\RSFPb G LqMahcNs7Ndgm;Zb%zYt&QD$MP^WNӜ룥q,di\$Kni{aR#A2:@GQӀ@ܔ">"LaHBdkSZ:PQ-'(5IV;Ycc-jl]兔lӯ+M,A |}[,^*wX rX:e xnMT ;%j.q;jr+=Aΐ=Ý>b_L7iFjwWٽOڐm6$YR&U8m-MvȎ5b!U"`VMvɴv5X^X@Ԭ@CLxvjwv {h9AD[vQk1ET&B)+ Nf`;8UVrZgaJ'bLqHd@\r{0{ڄt qB bőTcƹ^G"R/-D(7\FOQ RZd2'GKyeKadqSum+KlXjA؂m cge|VhR~7#SYL_ӆT1Ƥ@^ %HGdUW$ d ##F9Eʹ yHh hkF"hBKd1AvC]ITspD]x "[u{"+EM;"; .J)2t}#: ^R gJGV++BSGr9FќѨb iT$QY-8I  x!)!;')PiEB*xam7qnXC>F(BK;Δ%WQ<_X!(ToşpP\d5KN9# %ࡻ{N D>7U}79 ro3xNݛuvRRR&;\Fx,G- ѨG ̑ۍ gSLNҙ%*DP w#=?6Tlё!:+>|(]^vP嫷Tqe{} }tLуZP Q}݇ZmRK|hJڍƨ %'YH8<9 $ ѪdV5PdnFzGjpFtJ:UxĜ ?ڨ ! ckGE6uspV0MMD~AS#TL @ G6R4|ė_ݼo>,Vtfۤբ Wy28\dQ{ӕ*lf`PD- "J`&<DGbQ%ΝI]GEbN-!StMLrCDŤ4=tIj4{9~>=d{8i).uT?j3Ujݷ.Ρi]LH㦿C Nؠ<`A8bE쒅:@5abjci;'^)2; 3Z)dQ&}|W7^7_S愴+e[wwM7%|RTVJQ)E@60+L00V嘼9 22z%~ԇF\f eߘ<wr)Ef͠s uY>͸$\ÍB5i0`ch:ۂd~Cљ,`{RђRﯗM#CB Sۡ|qDO{"̟f7ؚzTa7$@_:5QAP>Ip;ΩfRǛ+ťO?XR,.-%PPqGs28޷4};T9U;co SP3#ރ϶j''ŧ2.3{' fz8~ycm={M5R2^/ s PcF^{؊_R#FB_}{Hؿ: 7 e\ro"ixr1ZF AF{ +]X`nT:F{`dΎBлsCK1Ws{NAlHZ C֔\es!~KkS=[:/qGAO]Lvc."Wfp\`yvyͰm3ߓLj'E_/:!hȊ?]dzTX+8:1kD1PF+OE'Cypj?*B )ܯ;id&īv\. nW1գjn=eY|-Sķ + O^o^\ -w_>zV2+"^G`Ӈ\^=Sݛ6Ka/Ki%# x-umZ hg& :)f-TG  II#L8~֒*,b[ē(IF[xÝ>)xBtr2or,$A@Io?È2a\ /Q$REB c#"SO[ELV^0SgHOu SS|{g(j]`J='ɻ>,5@`#QGVqB?Zl{[~=kĀ^$OO5`# 04*kڋ!BjD]e;1kȥ2`[彶!GHGS!ݎ"-V9ZWY? (ꞻ (yzCUCQek)zI瓓l◈^RR:̀B"#;NN=sFnE§JX/Y?s=|bjQΌײyg@<}R`Tp$(hlm}TPգaZ29tDI|Eq4:x$PZO01hPEÔ>+>) ^'u@EbQ%ǭwQ֞ W_/uR38Gonҕ_9^Z@84#*!{ Ԋh+DB:Iƴ ΰXz ؁jP EIB NOȑ0P̨ ROˍ@mfPT)=TѮ%jp*C>;C9Jf,DZH 1\N,Dg{j U*O!Ox2Ggν{d\j _G[RI-Sz`[7o$jCXӎ7 _7tuy%޽OiYvε]j;\-Op{]0O^||DӒ!`}8TIf概_4zT~9bv_(F%v0 %yJs JCܲ 8PBi*I PHw+A2`Z~!x\wߠH)p\&dt`_w$ɔTt _@">I (fGwͩHBuWx)OLn*羇Mpi~vw<$ޔg; !$ D즡|M^n4y/L%ᓥ7<wr.gR4Ybc7+1EƝ5F/ç{3|戴=ܯvtx=艷;wmmP*=lT$zjl&^+48"9"f03Zr,Kg_7Ðq֢S'΢C+ ]W[4bͺ"qUϕZ<8ڢ֣WjkӢwmm^ݿt 6£ @W{~1*QWmZklh*k "xPuӲзT\OX7ET0V5W@ PNҲSD4NFE`=(3_4{AB vyhKkۛtv)WܞR p9ٳ亱zHvBCAo Tio:&WIṀ@RakꐦD)"]>(1f.O-ťj 'k,5u3Ϥ5Z@\f%uҤ='b1v&FU1X~+vd⥫uah},p.IiW)`V"~}Y|Їͷ?8x駛/E$iGWR7E/46{K3Tт*h*,^nP3U(Up~Q"1"ꋾĐf=)+3!Ti~-krGc oozQ=;G\'b@VX Rj'#fl`9pϜ]o(P~ I>ԑҳ>PYGne47axn|@J$IX҇))R仁Z+MDKS09,ՎԪtal0QR&Y Tqj@qS8(t9n@Fϗ -,|o|Em]9ocC[ta_`2Ad*ܫuC􎨧}!G]0xJo:?^ Fc!/DlO1(*Gѻbc:ݎCtZJv "[M4ɦJg݈DCnĘN;x΍wE"[M4ɦJu=0bT wZ1GBBLŮL[nm Lsȓ|ws8_)JXךd7Ns/¸|CЌ(_À#^Iw! 
͍1b)'Llˆd`96>QNV b[Pt;_79Q%vQZxQ.z\ 궢C1T "?F +`Ci`'HDAp{ xM8M8LíkSITwSgM 4fjct_k+n'.wԕtj{]>ZK+HDQZ[~BXTyhs;T*vP}'AƔʖA~0QZmů)!i52x1X!~]PT-9=jA@niwT6PT =gp;a ۃ4Ǹ#%rHk߽U_># w rIOLum,K}MkѨ}x3Ds3bպw!h1Ő\+)+*)6,6"`Si-0k إ#\QweQdN5&MN19x"՝CD/G =vK$9}vT犑LaS~EhH4s'w*Wm kFӛ?,6<%u QNtIcV E:Ť0\Q z&kx9\eƉL ㇥XVzBzED\joG GNÚdig/?ew糩9_<8 nn*O646][_HU ۨ}`XM^Si;`-Bk|,9I(?r7WR'肁pϰ& O3y CO nBsg~m1oػe۶NuQ "g;老,^&/TRSbT)1WM< XbUg"(̽ Z6C/ɭ)4Đy50AXF=PR XG~Ζ:UhTk b-fSX:YVf?u`oޮ~'V(eL8 q! "c`eP[dt @bt ~'9Q.Fk@?e 5A/ols"dKaKC95#=퍒7Q蝍>m%-w A\Pg~h)2$̹A ܭsO2gt-=# ҿFV]o^K"pXnvqWal&}ˬkUa%c.aD0M7r`x##<۟?mm2ir<3ȴUN0cx> EȽ*V]%E 2\^D-,m/(N@X;`0pJƌϑa%,E30*P !򒴏z',s"1D ;\, -iգy,j{r7`lNdD88Oէ *pF(v0mi 7-Fj& `RzΜ2udbDÞ~8otV6pHyd'~.swM( Ip#`5l8cp`ŒI! -ɨʌ0cba2<C0k-h BGD'V;oU` 2&D/`~&w 9d hS4IhId+.$;GO (p Aߎ3Dθ FƬZbESCK{;OusZ SwݐP JaW*Az(E 9 0fu+H)ktw0)(lK iPvuFq*֞&mPA*^%#`OdO;̳nw3u|/ w r(>g\,_M Z(`Q07e I7"D# &tI $]v|-^:iKD6ֽ?ID/+yV 'TzX*lKSiG%3/.t:_AbTSV0# \0$i9;c3%ʔ\a2'8 D JÒSgVg3Zzh :bvDo6J4ZrDsZzaleHadNzSKp€Ir*@lANDUP=uef Jt޺ =eFH^jE wW{bo3\\j8mҒj["h1*[F#;\mA-)fS=x`c䑃!I-4I5ђ꿰G-UD%+fוLA7J#[DI}P}&|71u^UC20]p{r*…hyQ+I[nM]E\8O&lYd%l,M6,~i"kp~1-IoH5&MZ.!Lk,ZsrW)QOOՄY&JS!o4]:j6|""6˓cO8)L&:rrKʬzچbF-AjBiI0!zG={0#i?0t}A{ɴʛwq Ldz Ç`xks9( TcJLO S^xI#*(҃_a_mK H aPԒeBpGNXseԺ/";s.wvuyN8廐Qna{;壛b91a|MC+KXI:tݢeX&k:%%@(u2($Uqia[IOM釷V%UK[n)6E|kǻ[*1:9L9ϻ_ڰnmJsA׹` pu@+zyu)B\3-)Cs+_&@t5Vj*y['rx~e,]ygCl8Z[~τR݃b8~unArL[F5 Z!2ԯRѺ} p) y|@j"n_Պq۶,>k_DvGb!ᣕrrrxi9c$m-5$2ܔ%j,-vV\6RoG놣@W_n6[~ݝϦ|("ҬU"?'y% -Ƶ\lskUb$1SŽ`VHDžCR3QJ]FxՔS*"g,SӢ Q%0$68~j ["B)LbO 0g!4eebFb,s1aa&"00F=01X\ ocS1,7Bdf;Ir9,+N+``EG-ڕuAKc$SHCAM 8 .y2o|a l( DNEr{JBK"-XG'|ZOb1^Ի?]s9>9XFoyzy[hߛpPu{[ժ/>tI AF<\}`Q< `~e "&Lʋo}xw>ͦ7`G]˟e5lgkxA}"ͧG_.D3)L1-}qC`/^;0!TkkrE/ʸ_\)ۓMe'@D-utq_RKl@VQq{(3Ę%R[T68T"8b⼝ւ<>ʄp+% KN&/5CA{x A*/"^ZGȸ`X` A8-dOsaBF`ok=|VL\Q}2a nM&/UI< !RGH^-ɽUzR.PL9jG-6pO$lTNl[y;8on>=zX~̃W#xJ!ZάHǥ[|_,_zA>tZpN^=%tϷs~p/{7M cgoNB4#tvi0 ;=Z`]!!⛲ .^KgUgTăOQyHbκk22:DcZ,K:+K};ю H쬽/qT |$&(cZNƿUt=SHF; qaﵷ8iv,4QV)sXPAyW_ z<[Z A>C)F|8BDb 7BǒUaR]8z(H*0|@qO)砑!bSDD0ДjLqNOƤRğfT ct3̊[$Q!<( uo]4Ġ&)TC/;[DIʤ @lUf(T hKSJ`f3q kժ%MX;˟^=o<᪾dꀾAxzŅ*sd78>q4P8 GYaF!AnJ0:MCB"hR jb4ɔ`*OB213D2!ޔgj;QPvhbv#!,|[Ո:܋I&y_ |BAvBmbe4&g i αlNiϸlNni _6~ ļpNGBM@AЄpݑ)(Sx&/(y)t UP~|:D&ưx?A#"t"'W=#ou; W?.oDݭ[6:-q5S K͛B w+MLnEka{`Y; ~j>Mdq3RZ`R(omHډݜX!f:Zo[(q]Ba5H W -=HZj eѳVd~5AV_]_9GԻɖЛWH/#&@6We4x\N_t b'z1L0p=f-L)%.p:"Dk`Kc#j9`b%bH "4';e_l:v,h)6hh )3(L iC,JX >ɛސXBr|xҵ;\gw/UF(&I,I,N"СT=RKҌw ;*\ꢑ- r;N |-~>sě1.=ZUpo!Лb9b|";mľr,m/INZ]=}zdU 7_%w \} qV/%R@h~y2BP%V8š~u/M5%W[l(}Eِ E@]ܯGjeCri#Ln|'z*zכb(Lj5ݿ>, P+R='OQR8ɟ߹5uɪ޺޿/>5+8v7Izs5ϭ}"`,c;:e8'T%ŽM]o 105|qAİ=Z\O6խnMumۢMU\\sg4G@e(4)4BS_IX"*xqb?YttTbۅ&WST*#Deȩ#H7!ĸd)Hr͔ȦA7f{NF \22$;Å e0CTe88% oB)AFt ^(#ci1aR$VZDĝʛhov\yx k-P:9^35cW8qN7 ګ $PJyz#0a9(子0 xmXRTvO͓Iͱ4O^߭wGf/2Hdm{<^Sa*@M&@L/h%KUg"8 %AMׄ㪀ۍ~>(pm)ZdɝlN։HnQ_w3 3*gnGYM,QȄL`׾gX<wf|T@H1FZP#p2 3Hj 4ʰ,#YS7|ȧ(ddw* T6R 9$3`5Bĩ Y*EJȠgi$Ù T Lls =QwyRYJpќj4V15Mn$vRZ`-HSA iX#!/kНA%癕y8цHfD5)RTqCS a3@iT)WIqs0Q<.-ٶp[;R|9^.˜.KÇ#kjW | $\^Qʴ:#F)N%@*׏*EpI [L!L9# ZoXFCO2VbA͢؊>Ɖ)mIT:Q \oP4kg<c~c,rR@-_QPA5n fJP.a~Hye 4-ʻwLfu Ag :ZL;1aܹܩW $V2Z-nT3ghѽ`s'0ƕ9ܷ4iJLŠ(J,p#͹Ba9`-QN3_UG\xF1qgvIs>)}qs^B@*5D!u,-W\NjuT1j/':cAc@6\>dݟ  + vlxT3u ˱PAەR_S](PYV~3;{rXiӏ$#B!)2!ǿDA i W}puܹݼϘtuG{%_øQt_kk nuCp _, { -Er ~oMK86)mH(/1ӣ+l@7hk M&c>wuz÷#/ݓQB{4ṱvιˆKє]'cWgU9^|^;ew}S B'_ P y r ?z.'ޠf9zWwβ1i5f9Ĩ(ܒCb5k{$t?aJLUG;` CHvZ }W"rqDz*8\Ds1D~SvսJـwqך `ۏۯ,.GG Tˁ %MS)m 穢:jDRFyʥeZ0bDBU7n֫r>;}~ᏡZpAҩ/r"aܧS[cӨ^SM(Jn<YL8SAֳ>:##Z ÷"LȆ:],HҏBNO 5-5>؇m;So?:9&y2hǤj$Q}-rSR-3 e&6gI;%%̳hO9?skj#!r|F$+gcק9W?盻>Equ/p)?d`dKͫ"OTb.Y;T_#(e6ة)!aO`Q- xz3' t,I9ۊ+v6{s3{;x3~ k.f?5|yݻj, 
v~B-}4_TTm7XV:?B]J^X&ny|>?Y7]}䰒u9:ٲrڐpBʾ|i7 ]Ψb11h3fK -nW[E4J } nsFAĎQGK)i@քp)ZysAk h77*v DtbǨ:w{<ⶵ[|Y@քp=]ĂvCaf1$@Hy"F@"LpJ'וt<Ϯ 8;OB@/p/ |JowNhwڥѶ"~b:H Յ?ΊGU Bg3c /)O!AWt \н;PcҖx@j+lV{xEj` wmHr8&Y| p`^"][Hr%%Yj[ؙcQ꧊"Y|3}[lyW2ht[4LĸeT-!~mah *J;E 5hS]i$p5FbɔO[ɩתUԲ seR'[0{KA9QX[B!N:¬"0s$3TbZ$[:Z\+yNO:pv?y<9IlxȘ{гW_\.}8"r_ `W/)rd]>yܜTVaTl4f<󓱾M"II-cѕIA}c !R(YA"ט3(9Sw^[2JOhk*f9}1P^zgNjg_A+AQ+^Ĵ,73q\ŐIJ`ޛ:H >iht>hYzp92 ǒUu˒6u.;f6OF<0}ObVuE6n-@ $YП@rhyثj7[ 1SA)&ڦQzs p "RPH nVi @1_Pq'cT/H#5,R`D\c !<¦^+3$f 3;$ūhM3nii$!`HThƉ׎:^%`5D aZ +j`;tCy rzjTʷT Qa(?Q=>/3ǟ}z lcI 4J=M;돷HKڊUxs0t/s++ 59Hc-WI(A<8j}!2(sH!ɝU)ʔ/hru [[ 2RQ S\-\V!ĵ%"L\P*1btjMb<b|JTclAX b2Nr&]>s5AG?="srϧ~'ItbDQ5O(9d:[f|}u?ţg[έ~\^> O6HR.gwWDFqc#Z_~)'c o-bGM-lt=ov?tH3*e7Pⓣ񌸹egA@Hc},k/aJ;"]H~b?/V!,@kDЁS6CҐLhCG``#Gѣ9Y~۲Z΃QC O8x=T@U9ά^NVE9K;C^.ĈNo~^e{}N**?{)j# 1~6e 'yHᤨf3,PיCoh>A҆*˔] RSmImT|nIy>ι\wq}z16]O5Vghm>߄hb%f?|aqtؐ'q8:o?JD\~'Ln>m9P[{Q~ܻoW% =Q?MWOTdT+)R.jDF#-1";<%{{^ >_Q3}h]$s/7w˜ lD?sJh1\U)\))H?R y\-F$'7m65}v=>,>fO!WVN%t<GI>JZG+E8ʋV! #Ӏ| Riț#uҀBPY> V@yfjGNځ"esXlǠ!:#^)/ScBv4b.^6|QEњ- e t&kaT Xj^'കB jJ -X{tЫmObcX`i!C^ڗM&X-rwyyƐ!R_n&iٸ;2Ȋ^@w x (q>u:dPdŤVJ?ue]^[..yf5-RλhyW}m{>!rG˺r3ةUV5(Bd`[4rZ>JJJ1 oxSI' $jx& KJ7J=ot G {މh}g{>-@);F)^3 )3\`i!< N yAQ8oLB(FZJc:]PbO5,3 Qh4 c[N|  Fڀt"E ?m~ԕ6?:~las 1(^QKTcY4t' i fʬ<\A HKU7`qUo]'w1=)A%!b,7MQѩ_3 J #0)^^Tݵ"oqap?#yw6)~JI-'?z!ɥ_>\OC\T&ie/ƿ6'&^eGSB 6* Q5VXi0m?o-( ނ?Ӕʩl%aDR]ido,ZbaS4Ou'Qarݩr šHVu{Wf }k+!}s`eKĄbt͹cН@-㴫Nyn눭^ץP+ȆM}O˯Ԧ6@!l\7YbeuEh1вN9T %BW+\x 4렂+SV^+~ TJf (&.4>:XXoF V0ςC#[b"+ FY_ }Η`k>Edh$ =~ؖZngVG]9[1߮ƿc2˷J(}ƚH]W1ڜ3%Voԑdͧ׆=q*LN4oͯj>p YR9}KMĦVbwxޭ)}Fv)T޼[uLֆr-jiFޭRs&l9Lo)$.>숥*ӝKJqVMt1firb%JG;Bp~쇏}Now~<;{TQTѰ XeP=G|T܌ofe:\q84{-E"MKQjU-5At8ջ|} 0Ί)HEN?ZyjEwmHeow4ށv>l9^\KR* ^,ndIKRN89"m9230窎(g{tQQ9!!Rp:&8SńV<4,NA2PJhvq`DqɪcƙRp7?dyxC찾L'Q`-e[g+@Nn)J6=6I 58u08:~V׿5j*D#d MB I/vձzƖ:*NZpz}0LKǖg9`W Q1qnVav %Hp rԅuLt[/Gڭdqgf ?D^~\*< KsCSzPfmO`µX(# ȓn~6>SuD_⾽PŜ/Se_@'-%^Ke]\¢mx_v00Bv)aS5vp^?XXp?_[]8 S8'={u3-^*&$1rZ=;g:|ZsOCLع$ *Q\5QIK{F>LI`7[IiXOol^E?bk4*giQfj,?+U\ysWʳM /y!dyk Y(HqEBqRr q~^wD_\in ަ]ACN3fjYv=`1>^%-avu=^Z6QS[I[.]ha2.D_-\hן"jDJ`9,եOܥ6v7ĵJsi#C*TH 5$>jsaۑg1B{5:9i;Х:Ac8:E:fFt EXmsN4wx68o Ai4*(<)ՅjP{O2Vm1R(BqIZ##矾jT W3qJSQ0 (Mٿj%c"='`Ac Yl)<Yi=yA0"Q\YPg4w (Z4([Xps$xPE`qUiV3J~쐷bX%z];"WWr;T%[$D%oY~Q`^ML?ȸg~~Ǜnn./§x\!]΍/]%~n(٢'*H9xgLQva!TRoqMaŎwI$޿힋K7SPa\7B1D1nжDow$ƠYj]< Ps.Q]eH]fʐN+?Tr[t^T[)x(Z!죲&3N'L@YgzbFrME0%;[9"׆ PE8"ݰRBn>?$D {Kr SnJ3iOlcRA:1f[%F J ^q3JK*F)7/z\CDG&\0InK!'A^O#24C!OBza &΀}<Ѯj ҇~I#iĆkE )8`h_OdOܳ/ۘ%a'/{m dENJe,xH6/q8=9rJm͗MrU[KĻi.(doqxfwO89/Ql> 漵-]E[tmjKU<P%R5{"$|ĕrT"bn1F-kPrŜHAWIIEj'7+.(m48)Y;Ԑ8x2;H)"T#L`B"N` XH1N8aUa]Tsc(ͰR쌰 GCQ-3- ``-saB!Cx4ޮ2|VK)e @= ғ{prO9"LppOp0P f=U.H PdZٚV!)/F埗(^;}bT:1JS!㭈:o,l":QR(*+';VjR٦]y˽JS\^_㚐U[I5}{sޗ2|7ˋ(,-F?^\/ׯ^a4!XNk%%ma4nLC5+]:: 0ҿ"n[7}6r.FawbC`6gbO(P,U͎ j.gr6 J5fӇ;y*B_l17{Fy&̴q"?³ȱcJh1h1'wLeOl: !6>ƔQS@LxC`GHe 8+ˡ+GvI,h6@yQd\{>ZpBB7d=Kɖb=g_ܣ-& Rx)`WҦ /T إ-Q6(Ž> r5u}Hضfuա6Yl #1T &¦6lBxF kʃVV _^4o6dN3 (%I { 5GooJͪ}/^\HG*dXmFTqfˋٽY\%-`|9߻ŷes-&lnV,|?fc>٪TrMLt {onܼ`/AV.*KM?1gރ.KPvTK6J`#]+a)؛޾b$Y1җeObFvtvZBπ2"I5U͝>~4(^ʚ4ۅL#I:QG]N /iRK%*\%D +Y3*(p{yi!_me:4V}|-Ӟ%8zVyp1,5KΪE9;RiOH֪J@hRFcΰ.BK· p &"|Li2)BHrn}jTNv|u:i37.G K^]zm0yu^?,oJ7nYb1Bx4bdRky;d1Tʼnz)z-i0z1`Ɔ˅=O̪WnXJ~1~rr= ;,YJ%TJB $VԫeQOLBav~}Epǿ<=cON/s@Ҽ"e @69lފRBo88 VX+LDZO`;oE9u 3-h~U%5* /9%ӲUpSXI!_E:[>*Nܭjlmb8 +E9e;sr+VvQSc"pyXq{ϱF0R+ÍF0l`&'Ұ oFTaD9RN<[H k@*y{{)7ZpA U(MF8l {a)#N`imP~QD+EX=q\l %4$lB!fq@0~3X %,pX`*&-ET6cq\YC ロddIܨ9ĭ Q"kPo^D3q))eDYajesta0??%;;)ϸr U^]˓Z7juzA52pې}Ě+2@Xglz-)y˨$\ 1^7J0ƟǛV-wC١ YhDB׼kb\k+VjiG[+~6~Sq j%i>=KݐcRcUk+z7ͣO`lҰsb Ď˫{~1"vlQE="(UU~sJZU Q9^W;WCnL[==2XN#wmo)nbY\v 
r)4t{%b` 93+,})d ɵJM $ek,wa @:f%ņ^8(XƁֆ0],AYPTɛnD{Ԍpʎ:;x8/)^gcꩌX:?/@XS)PKL *;;(@D#!}QWe>J]:,B1EUA5m3fb7G'4ر*I+Z-U!'Q/KO=|VefXm\A1@/c!kdG1%zb1w4n%biڭ y"%S}}MnZ-щ}Gv6Sޝڐ.12+_} j7-щ}Gv`i%%vڐ.˔p-w :'d}q'*o`+ e)Kˍ={XY*q 4oA 8UUrBv2Rmnxq-:Xa?kTyq'mr"pɹ7铨s@D§r͞q6}MooSSׯ^a"FïPGӏ>5*< 6(s~l⭭>16d%+ /v,f @ ;ĚufuRy1m5jЭDI#/Nf|xvt^sKKl.Ap'-Gd^8UthZCmGvEo BXqx(8l%3\Rgp|<9-8E 'պBR@vۇ=Asdª{6YYX%ym*| .ٸ38|06u>uHp_<=g8?~-!#!zyv3<`PssC~r{{>DdXc. ?tYg|ǟS̱⌭=[1j(Lz2jcG$!Q6Rn8t7Xx+RP$fZJxel(g׮ 35釴|NH(V#OnzaɇPQ3ڔ;F!NNf9Atef(,bŇ@( CHɋ"Sp9SPۥ%\X&r4b 9u]N` e-Cw+ш$sOmS[B 0B8e%+ uae>D9Em(RVة!`{0ΐ%Z(kO+$0Q\*Bpm. +w;}sgӕ1i~p/b<,[/-W {Yp%qz|(p+6d{ Jr:GSµ/(Β_,{kӽهI)-}+$׆^cY3gφ70tn_<2\3na3H<8J˺;c=Be2P_b9QO.)8%Tz%b#rV=uPu2 ,1ڇ;f-عxUF2 KyvV}?t>E~}S^ 5B_:8ެf(^-;IA8KF% "0E fŒ%oY{keFfX"ƥvhEEՉcCRAX ɰ M/`HDLI%C<k a k'&j} xҳJލLxg8tc,V$J*4AH:rN @ANDe\2eZ8aZ1U)n=[[dĨ{ENXI<8_lVdK#KNMFrLTd;'G cA5Nl5CDjTd!%:-(( =FnHXϔ\%WU\U_ޒk3,K.v~LYrLo^S?oF^O?~Bq|`,0,)!8sXRx=hOw\I!aMUZUHgRI<s2%E@{#}QJ(GPb֜}͘k._ن#bԋӽCTۦSƏ#_fMAx?Xxy1oooW7?NgN'ng!>QD%_}f7oľcNk?xh'G|9g'<!*ԍ 12I%ah ԨV3sbHU(--l<&Q#V npaCwNBB(6X!Hʆt,,8Z 8`Rmob{a_Aח`:sJ._.~Jڸ<|`@Յ.cIDM; n0ΰA4 lITzFMkJ:)7L-$y Cr ja٪| L`G'q@%/=m ,x?t>Yu;D`'t]3{7i>x05OF)qV?8'\##J ջ_D_]#+o:ѫ~uѦA*G'Wgq!~u&cs@P}r%k|uJįm%G-7u:6"׭WWreV9nٽqD[wj8i~Orjla m\= -|%aDRS*ҜdfM_@UpA>;G pP`1Jݶ-t9vE9tn$C\ǿD,U8l37^۰{bz}Wz՝!Dw0=H\kz /O'ۦ+|,-=3Uup!S1F!(>5#El#b'6Kɧ6>bKJiђlk%}$qw?qҗah/>ZrZy7cjQܹDjC*p_efg7c!K߬>ǾPx{- Vn|PبV\]ϛMc=}sRqGeo/!5͇XUq c#3Y{uOwz2ΠK9z,CCβ]&q1yybOؗ{9Ũԝ)/_T.e 57N?+L0+Tnwڛ YYKpFq!Ío2}[t龬ld%cFm٤hUv@>L/R~}8ѪC;+ܼ:Š*͗,BCY(\¡ErlgWnXL"##Qܦo֦-[T@a.+QdZ4ϔBKBIA2 ҁ)qi3XX֛wnxٙXN E N 3v]?9XuqX5) -4,!]TԔPP.Oo<]Fg:۫/ooo3+jۢ9Cz39h3|VIL39 gr E4JӾp}nN;h38 LEO4T!!/\DO)ɜ"K~<#ۇ;‡O#•k)ϐnGcTu?BpCvOwy;!5 :z٢MbRJkupi|\D5kmH /~wdmIK6"U4H i `KNW?o_yM3J&'WÏEy7F˫Ž/)1L%%0VKJ,^MA`bܣh$7”A1O%P tJbL05dL郦,w@_ 3g)&;K.gUn/IHɃyrrjCU }ͻA!H7^߼遼OȓFw(Yc3{J7j3%q[< ܲu%% cӠ_z8v28F=)a6{gCOF0bhv]aR}3.` 0LCi.=7ՂI7z{-rRlrYl93[Ոѻ]~c?kd J.F iJg׈TMU?Bt+Ōr# R݁#ؚ Bk-VHi(L$@X bJ\ee*%r^a[ 2Zsf씙ŋ텬{fb{`!Dzg,^<)Hѧ(~8Mʏ!QzyuБ0H]-+}R~W؆1=cc}OpHz\\Jf7#udt=>J־8 +qðCi)hW d;Q]wda!t!Cj2mtw9OG?[Wi9#v[.YÉ8=f0J;k| 3rJp \2 14&gϩ>6ny۽g Z@_) Ø2GsɅJ*q7OgxrH.9t`mϻiWO{B8{َ_]]CJYw_-n2y)oy%1 ,q(ױpNkcit'6&-Stm#ޒ+G^am/xVęm#~:ʇj:-%,vJڀ:FGWq )KM5Q%eR bo˦.,DE*NTDE*NTTa~؈i9'X!IEveB :` R+aR.ä\I ʥZPrTbd$R0(IHR+dm3LsM(aQfj:PTM_)Nd')1(kC^? f_y8lQnbTevs^YT?i17coƓWSքrIv|f>}7-L@fN@S&caFC٧I.>ܓ_ȵmP(J+)k(-u ir2X Mɛʤ_Z# #Ỉgr4ʳ*W 3Va@?\/](Q;zz95VFơa[io@ zF...ouJc]ɂ^&?n8JVHQx RaetB/_z^؆U%Ɛ1d[xIw̗#=vͤ_.Q6V^~g۶ȚDdP-f{Y;3IIg%g դ! 
var/home/core/zuul-output/logs/kubelet.log
Feb 04 11:27:29 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 04 11:27:29 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to
system_u:object_r:container_file_t:s0:c968,c969 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 04 11:27:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 
crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 
crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 04 11:27:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 11:27:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 11:27:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 11:27:29 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 11:27:29 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 04 11:27:30 crc restorecon[4683]:
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 04 11:27:30 crc restorecon[4683]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 
11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 04 11:27:30 crc 
restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 
11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 04 11:27:30 crc restorecon[4683]: [run of similar entries, one per path below, each reading "<path> not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13"]
Under /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/: the directory and its catalog.json for each of: loki-operator, machine-deletion-remediation, mariadb-operator, marin3r, mercury-operator, microcks, mongodb-atlas-kubernetes, mongodb-operator, move2kube-operator, multi-nic-cni-operator, multicluster-global-hub-operator, multicluster-operators-subscription, must-gather-operator, namespace-configuration-operator, ncn-operator, ndmspc-operator, netobserv-operator, neuvector-community-operator, nexus-operator, nexus-operator-m88i, nfs-provisioner-operator, nlp-server, node-discovery-operator, node-healthcheck-operator, node-maintenance-operator, nsm-operator, oadp-operator, observability-operator, oci-ccm-operator, ocm-operator, odoo-operator, opendatahub-operator, openebs, openshift-nfd-operator, openshift-node-upgrade-mutex-operator, openshift-qiskit-operator, opentelemetry-operator, patch-operator, patterns-operator, pcc-operator, pelorus-operator, percona-xtradb-cluster-operator, portworx-essentials, postgresql, proactive-node-scaling-operator, project-quay, prometheus, prometheus-exporter-operator, prometurbo, pubsubplus-eventbroker-operator, pulp-operator, rabbitmq-cluster-operator, rabbitmq-messaging-topology-operator, redis-operator, reportportal-operator, resource-locker-operator, rhoas-operator, ripsaw, sailoperator, sap-commerce-operator, sap-data-intelligence-observer-operator, sap-hana-express-operator, seldon-operator, self-node-remediation, service-binding-operator, shipwright-operator, sigstore-helm-operator, silicom-sts-operator, skupper-operator, snapscheduler, snyk-operator, socmmd, sonar-operator, sosivio, sonataflow-operator, sosreport-operator, spark-helm-operator, special-resource-operator, stolostron, stolostron-engine, strimzi-kafka-operator, syndesis, t8c, tagger, tempo-operator, tf-controller, tidb-operator, trident-operator, trustify-operator, ucs-ci-solutions-operator, universal-crossplane, varnish-operator, vault-config-operator, verticadb-operator, volume-expander-operator, wandb-operator, windup-operator, yaks.
Under the same pod (/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/): volumes/kubernetes.io~empty-dir/utilities, volumes/kubernetes.io~empty-dir/utilities/copy-content, etc-hosts, containers/extract-utilities/c0fe7256, containers/extract-utilities/c30319e4, containers/extract-utilities/e6b1dd45, containers/extract-content/2bb643f0, containers/extract-content/920de426, containers/extract-content/70fa1e87, containers/registry-server/a1c12a2f, containers/registry-server/9442e6c7, containers/registry-server/5b45ec72.
Under /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/: catalog-content, catalog-content/catalog, and within catalog-content/catalog/ the directory and its catalog.json for each of: abot-operator-rhmp, aerospike-kubernetes-operator-rhmp, aikit-operator-rhmp, anzo-operator-rhmp, anzograph-operator-rhmp, anzounstructured-operator-rhmp, cloudbees-ci-rhmp, cockroachdb-certified-rhmp, crunchy-postgres-operator-rhmp, datadog-operator-certified-rhmp, dynatrace-operator-rhmp, entando-k8s-operator-rhmp, flux, instana-agent-operator-rhmp, iomesh-operator-rhmp, joget-dx-operator-rhmp, joget-dx8-operator-rhmp, k10-kasten-operator-paygo-rhmp, k10-kasten-operator-rhmp, k10-kasten-operator-term-rhmp, kubemq-operator-marketplace-rhmp, kubeturbo-certified-rhmp, linstor-operator-rhmp, marketplace-games-operator-rhmp, model-builder-for-vision-certified-rhmp, neuvector-certified-operator-rhmp, ovms-operator-rhmp, pachyderm-operator-rhmp, redis-enterprise-operator-cert-rhmp, seldon-deploy-operator-rhmp, starburst-enterprise-helm-operator-paygo-rhmp, starburst-enterprise-helm-operator-rhmp, t8c-certified-rhmp, timemachine-operator-rhmp, vfunction-server-operator-rhmp, xcrypt-operator-rhmp, yugabyte-platform-operator-bundle-rhmp.
Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 11:27:30 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 04 11:27:30 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 04 11:27:31 crc kubenswrapper[4728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 04 11:27:31 crc kubenswrapper[4728]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 04 11:27:31 crc kubenswrapper[4728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 04 11:27:31 crc kubenswrapper[4728]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 04 11:27:31 crc kubenswrapper[4728]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 04 11:27:31 crc kubenswrapper[4728]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.299064 4728 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307241 4728 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307286 4728 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307297 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307308 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307318 4728 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307326 4728 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307335 4728 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307345 4728 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307357 4728 feature_gate.go:330] unrecognized feature gate: Example
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307368 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307379 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307390 4728 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307400 4728 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307411 4728 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307421 4728 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307430 4728 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307440 4728 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307450 4728 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307460 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307470 4728 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307479 4728 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307489 4728 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307514 4728 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307524 4728 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307535 4728 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307544 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307553 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307561 4728 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307569 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307577 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307584 4728 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307592 4728 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307599 4728 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307607 4728 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307615 4728 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307622 4728 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307630 4728 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307637 4728 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307646 4728 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307654 4728 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307661 4728 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307669 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307677 4728 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307684 4728 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307692 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307700 4728 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307708 4728 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307715 4728 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307723 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307730 4728 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307738 4728 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307745 4728 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307790 4728 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307800 4728 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307809 4728 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307855 4728 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307864 4728 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307871 4728 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307884 4728 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307893 4728 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307901 4728 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307911 4728 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307919 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307927 4728 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307935 4728 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307943 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307952 4728 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307961 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307968 4728 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307978 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.307989 4728 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
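[editor's note] The run of feature_gate.go warnings above shows the kubelet applying a feature-gate map handed to it by the OpenShift-rendered configuration: gate names the upstream kubelet does not register (the OpenShift-specific ones such as GatewayAPI or NewOLM) are warned about at feature_gate.go:330 and skipped, recognized gates are applied with an extra notice when they are already GA or deprecated, and the effective map is eventually printed at feature_gate.go:386 ("feature gates: {map[...]}", visible further down; the same warning set is also replayed twice more as the gates are re-applied during startup). A minimal Go sketch of that warn-and-continue merge, not the kubelet's actual implementation (which lives in Kubernetes' component-base featuregate package); the names stability, known, and merge and the tiny gate table are hypothetical:

```go
// Minimal sketch (not the kubelet's actual code) of the tolerant
// feature-gate merge behind the feature_gate.go messages above.
package main

import "fmt"

type stability int

const (
	alpha stability = iota
	beta
	ga
	deprecated
)

// known stands in for the kubelet's registered gate table; the real
// table is far larger and versioned per release.
var known = map[string]stability{
	"CloudDualStackNodeIPs":                  ga,
	"DisableKubeletCloudCredentialProviders": ga,
	"ValidatingAdmissionPolicy":              ga,
	"KMSv1":                                  deprecated,
	"NodeSwap":                               beta,
}

// merge applies requested overrides onto defaults, mirroring the
// warn-and-continue behavior in the log: unknown names (the
// OpenShift-only gates) only produce a warning and are skipped.
func merge(defaults, requested map[string]bool) map[string]bool {
	effective := make(map[string]bool, len(defaults))
	for k, v := range defaults {
		effective[k] = v
	}
	for name, enabled := range requested {
		st, ok := known[name]
		if !ok {
			fmt.Printf("W unrecognized feature gate: %s\n", name)
			continue
		}
		switch st {
		case ga:
			fmt.Printf("W Setting GA feature gate %s=%t. It will be removed in a future release.\n", name, enabled)
		case deprecated:
			fmt.Printf("W Setting deprecated feature gate %s=%t. It will be removed in a future release.\n", name, enabled)
		}
		effective[name] = enabled
	}
	return effective
}

func main() {
	defaults := map[string]bool{"NodeSwap": false}
	requested := map[string]bool{
		"CloudDualStackNodeIPs": true, // GA: warns, still applied
		"KMSv1":                 true, // deprecated: warns, still applied
		"GatewayAPI":            true, // OpenShift-only: warned and skipped
	}
	fmt.Printf("I feature gates: %v\n", merge(defaults, requested))
}
```

The deprecated-flag warnings a few entries earlier carry their own remedy: each of those flags has a KubeletConfiguration counterpart (e.g. systemReserved, registerWithTaints, containerRuntimeEndpoint) in the file passed via --config, which on this node is /etc/kubernetes/kubelet.conf per the FLAG dump that follows. [end note]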
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309137 4728 flags.go:64] FLAG: --address="0.0.0.0" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309176 4728 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309198 4728 flags.go:64] FLAG: --anonymous-auth="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309214 4728 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309230 4728 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309242 4728 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309259 4728 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309273 4728 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309285 4728 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309297 4728 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309311 4728 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309325 4728 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309338 4728 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309350 4728 flags.go:64] FLAG: --cgroup-root="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309362 4728 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309374 4728 flags.go:64] FLAG: --client-ca-file="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309385 4728 flags.go:64] FLAG: --cloud-config="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309396 4728 flags.go:64] FLAG: --cloud-provider="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309410 4728 flags.go:64] FLAG: --cluster-dns="[]" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309429 4728 flags.go:64] FLAG: --cluster-domain="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309441 4728 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309454 4728 flags.go:64] FLAG: --config-dir="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309465 4728 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309478 4728 flags.go:64] FLAG: --container-log-max-files="5" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309493 4728 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309504 4728 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309517 4728 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309529 4728 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309541 4728 flags.go:64] FLAG: --contention-profiling="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 
11:27:31.309553 4728 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309565 4728 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309579 4728 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309591 4728 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309605 4728 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309617 4728 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309629 4728 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309640 4728 flags.go:64] FLAG: --enable-load-reader="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309653 4728 flags.go:64] FLAG: --enable-server="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309664 4728 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309679 4728 flags.go:64] FLAG: --event-burst="100" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309691 4728 flags.go:64] FLAG: --event-qps="50" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309703 4728 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309715 4728 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309727 4728 flags.go:64] FLAG: --eviction-hard="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309741 4728 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309785 4728 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309800 4728 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309812 4728 flags.go:64] FLAG: --eviction-soft="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309824 4728 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309836 4728 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309847 4728 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309859 4728 flags.go:64] FLAG: --experimental-mounter-path="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309872 4728 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309884 4728 flags.go:64] FLAG: --fail-swap-on="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309896 4728 flags.go:64] FLAG: --feature-gates="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309910 4728 flags.go:64] FLAG: --file-check-frequency="20s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309922 4728 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309934 4728 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309947 4728 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 
11:27:31.309958 4728 flags.go:64] FLAG: --healthz-port="10248" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309971 4728 flags.go:64] FLAG: --help="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309983 4728 flags.go:64] FLAG: --hostname-override="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.309994 4728 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310008 4728 flags.go:64] FLAG: --http-check-frequency="20s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310020 4728 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310031 4728 flags.go:64] FLAG: --image-credential-provider-config="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310043 4728 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310055 4728 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310067 4728 flags.go:64] FLAG: --image-service-endpoint="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310078 4728 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310090 4728 flags.go:64] FLAG: --kube-api-burst="100" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310101 4728 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310114 4728 flags.go:64] FLAG: --kube-api-qps="50" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310129 4728 flags.go:64] FLAG: --kube-reserved="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310140 4728 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310152 4728 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310164 4728 flags.go:64] FLAG: --kubelet-cgroups="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310176 4728 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310187 4728 flags.go:64] FLAG: --lock-file="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310199 4728 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310210 4728 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310223 4728 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310241 4728 flags.go:64] FLAG: --log-json-split-stream="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310253 4728 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310264 4728 flags.go:64] FLAG: --log-text-split-stream="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310275 4728 flags.go:64] FLAG: --logging-format="text" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310287 4728 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310299 4728 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310310 4728 flags.go:64] FLAG: --manifest-url="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310322 4728 
flags.go:64] FLAG: --manifest-url-header="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310337 4728 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310349 4728 flags.go:64] FLAG: --max-open-files="1000000" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310364 4728 flags.go:64] FLAG: --max-pods="110" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310376 4728 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310389 4728 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310400 4728 flags.go:64] FLAG: --memory-manager-policy="None" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310412 4728 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310424 4728 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310435 4728 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310447 4728 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310475 4728 flags.go:64] FLAG: --node-status-max-images="50" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310487 4728 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310501 4728 flags.go:64] FLAG: --oom-score-adj="-999" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310513 4728 flags.go:64] FLAG: --pod-cidr="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310524 4728 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310543 4728 flags.go:64] FLAG: --pod-manifest-path="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310554 4728 flags.go:64] FLAG: --pod-max-pids="-1" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310566 4728 flags.go:64] FLAG: --pods-per-core="0" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310578 4728 flags.go:64] FLAG: --port="10250" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310592 4728 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310604 4728 flags.go:64] FLAG: --provider-id="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310615 4728 flags.go:64] FLAG: --qos-reserved="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310627 4728 flags.go:64] FLAG: --read-only-port="10255" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310639 4728 flags.go:64] FLAG: --register-node="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310650 4728 flags.go:64] FLAG: --register-schedulable="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310662 4728 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310682 4728 flags.go:64] FLAG: --registry-burst="10" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310694 4728 flags.go:64] FLAG: --registry-qps="5" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310705 4728 flags.go:64] 
FLAG: --reserved-cpus="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310716 4728 flags.go:64] FLAG: --reserved-memory="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310731 4728 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310743 4728 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310787 4728 flags.go:64] FLAG: --rotate-certificates="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310800 4728 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310812 4728 flags.go:64] FLAG: --runonce="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310824 4728 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310836 4728 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310847 4728 flags.go:64] FLAG: --seccomp-default="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310859 4728 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310871 4728 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310883 4728 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310896 4728 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310908 4728 flags.go:64] FLAG: --storage-driver-password="root" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310919 4728 flags.go:64] FLAG: --storage-driver-secure="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310930 4728 flags.go:64] FLAG: --storage-driver-table="stats" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310942 4728 flags.go:64] FLAG: --storage-driver-user="root" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310954 4728 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310966 4728 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310977 4728 flags.go:64] FLAG: --system-cgroups="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.310988 4728 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311009 4728 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311021 4728 flags.go:64] FLAG: --tls-cert-file="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311032 4728 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311048 4728 flags.go:64] FLAG: --tls-min-version="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311059 4728 flags.go:64] FLAG: --tls-private-key-file="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311072 4728 flags.go:64] FLAG: --topology-manager-policy="none" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311084 4728 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311096 4728 flags.go:64] FLAG: --topology-manager-scope="container" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311108 4728 flags.go:64] 
FLAG: --v="2" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311134 4728 flags.go:64] FLAG: --version="false" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311149 4728 flags.go:64] FLAG: --vmodule="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311163 4728 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.311175 4728 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311466 4728 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311481 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311493 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311504 4728 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311517 4728 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311528 4728 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311539 4728 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311559 4728 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311572 4728 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311584 4728 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311594 4728 feature_gate.go:330] unrecognized feature gate: Example Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311604 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311614 4728 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311624 4728 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311634 4728 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311644 4728 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311653 4728 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311663 4728 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311672 4728 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311683 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311693 4728 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311703 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 04 11:27:31 
crc kubenswrapper[4728]: W0204 11:27:31.311713 4728 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311723 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311739 4728 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311779 4728 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311792 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311802 4728 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311857 4728 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311869 4728 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311879 4728 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311890 4728 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311899 4728 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311909 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311920 4728 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311930 4728 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311940 4728 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311950 4728 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311960 4728 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311974 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311984 4728 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.311994 4728 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312007 4728 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312021 4728 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312034 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312047 4728 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312059 4728 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312070 4728 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312084 4728 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312097 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312109 4728 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312119 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312129 4728 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312140 4728 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312154 4728 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312165 4728 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312182 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312193 4728 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312204 4728 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312215 4728 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312225 4728 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312235 4728 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312244 4728 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312254 4728 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312267 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312277 4728 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312288 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312297 4728 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312307 4728 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312317 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.312327 4728 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.313284 4728 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.322416 4728 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.322445 4728 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322524 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322531 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322536 4728 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322541 4728 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322545 4728 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322548 4728 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322552 4728 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322556 4728 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322559 4728 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322563 4728 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322566 4728 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322570 4728 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322573 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322577 4728 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322581 4728 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322584 4728 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322588 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322591 4728 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322595 4728 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322598 4728 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322602 4728 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322605 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322609 4728 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322612 4728 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322616 4728 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322619 4728 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322623 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322627 4728 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322630 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322634 4728 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322637 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322641 4728 feature_gate.go:330] unrecognized feature gate: Example
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322644 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322649 4728 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322658 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322663 4728 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322667 4728 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322671 4728 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322675 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322678 4728 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322682 4728 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322685 4728 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322690 4728 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322694 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322698 4728 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322702 4728 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322705 4728 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322709 4728 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322712 4728 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322715 4728 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322720 4728 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322725 4728 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322728 4728 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322732 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322736 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322739 4728 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322743 4728 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322762 4728 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322767 4728 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322771 4728 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322774 4728 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322778 4728 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322781 4728 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322786 4728 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322790 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322794 4728 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322797 4728 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322801 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322805 4728 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322808 4728 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322813 4728 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.322819 4728 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322940 4728 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322948 4728 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322952 4728 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322956 4728 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322960 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322964 4728 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322968 4728 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322972 4728 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322976 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322979 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322983 4728 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322987 4728 feature_gate.go:330] unrecognized feature gate: Example
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322990 4728 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322994 4728 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.322997 4728 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323001 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323005 4728 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323008 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323012 4728 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323016 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323019 4728 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323023 4728 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323027 4728 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323030 4728 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323034 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323037 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323041 4728 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323044 4728 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323049 4728 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323055 4728 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323060 4728 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323064 4728 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323068 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323072 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323076 4728 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323081 4728 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323086 4728 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323090 4728 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323094 4728 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323098 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323101 4728 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323105 4728 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323109 4728 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323112 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323116 4728 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323120 4728 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323123 4728 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323126 4728 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323131 4728 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323135 4728 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323141 4728 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323145 4728 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323149 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323154 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323159 4728 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323164 4728 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323169 4728 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323173 4728 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323178 4728 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323182 4728 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323185 4728 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323189 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323193 4728 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323196 4728 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323200 4728 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323203 4728 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323206 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323210 4728 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323213 4728 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323217 4728 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.323221 4728 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.323227 4728 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.324117 4728 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.328301 4728 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.328406 4728 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.329818 4728 server.go:997] "Starting client certificate rotation"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.329833 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.330030 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-01 03:51:42.121641209 +0000 UTC
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.330159 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.353176 4728 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.355945 4728 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.357111 4728 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.378204 4728 log.go:25] "Validated CRI v1 runtime API"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.412341 4728 log.go:25] "Validated CRI v1 image API"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.414877 4728 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.422777 4728 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-04-11-22-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.422872 4728 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.452986 4728 manager.go:217] Machine: {Timestamp:2026-02-04 11:27:31.44855602 +0000 UTC m=+0.591260435 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:66cbc6ec-a45e-4a6f-aa22-486f7addb0e0 BootID:1f12b397-1ee0-403e-83d4-9817c484418d Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:43:14:34 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:43:14:34 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e1:43:8d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a8:ae:6b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1f:fe:32 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:59:f6:fd Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0e:e1:f1:11:58:e6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:66:31:b1:ce:c4:2f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.453456 4728 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.453729 4728 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.455375 4728 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.455732 4728 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.455871 4728 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.456200 4728 topology_manager.go:138] "Creating topology manager with none policy"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.456217 4728 container_manager_linux.go:303] "Creating device plugin manager"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.456892 4728 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.456951 4728 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.457719 4728 state_mem.go:36] "Initialized new in-memory state store"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.458223 4728 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.461772 4728 kubelet.go:418] "Attempting to sync node with API server"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.461813 4728 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.461869 4728 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.461888 4728 kubelet.go:324] "Adding apiserver pod source"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.461905 4728 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.466326 4728 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.467620 4728 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.468354 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.468472 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused
Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.468691 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError"
Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.468628 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.470829 4728 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472529 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472574 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472588 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472602 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472624 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472640 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472654 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472677 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472692 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472705 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472735 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.472749 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.473957 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.474963 4728 server.go:1280] "Started kubelet"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.476033 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused
Feb 04 11:27:31 crc systemd[1]: Started Kubernetes Kubelet.
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.481897 4728 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.483195 4728 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.481856 4728 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.491515 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.491559 4728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.492351 4728 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.492365 4728 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.492466 4728 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.492938 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.493029 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:30:59.850857037 +0000 UTC
Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.493073 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="200ms"
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.493802 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.493899 4728 server.go:460] "Adding debug handlers to kubelet server"
Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.493947 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.494302 4728 factory.go:55] Registering systemd factory
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.494332 4728 factory.go:221] Registration of the systemd container factory successfully
Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.493107 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.128:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18910791d5fa36e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-04 11:27:31.474912995 +0000 UTC m=+0.617617410,LastTimestamp:2026-02-04 11:27:31.474912995 +0000 UTC m=+0.617617410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.494830 4728 factory.go:153] Registering CRI-O factory
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.494874 4728 factory.go:221] Registration of the crio container factory successfully
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.495045 4728 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.495108 4728 factory.go:103] Registering Raw factory
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.495145 4728 manager.go:1196] Started watching for new ooms in manager
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.496433 4728 manager.go:319] Starting recovery of all containers
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504150 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504205 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504219 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504235 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504247 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504259 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504270 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504281 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504294 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504305 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504325 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504338 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504349 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504362 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504373 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504383 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504394 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504404 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504415 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504427 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504437 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504448 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504463 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504475 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504487 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504499 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504544 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504573 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504604 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504619 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504670 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504683 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504696 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504710 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504724 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504736 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504747 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504780 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504809 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504826 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504850 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504866 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504882 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504899 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504911 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504922 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504969 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504983 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.504995 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505007 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505019 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505030 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505045 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505058 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505071 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505082 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505098 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505111 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505122 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505134 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505146 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505157 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505169 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505181 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505195 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505210 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505226 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505241 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505266 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505278 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505293 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505303 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505314 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505325 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505336 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505349 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505360 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505370 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505381 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505391 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505402 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505413 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505424 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.505435 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510017 4728 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510075 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510096 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510113 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510128 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510146 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510161 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510179 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510192 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510213 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510239 
4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510259 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510275 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510291 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510309 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510326 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510342 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510358 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510376 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510392 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510410 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510437 4728 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510457 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510724 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510831 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510884 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.510912 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511158 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511193 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511216 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511252 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511283 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511320 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511346 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511374 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511395 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511420 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511447 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511464 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511487 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511505 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511539 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511565 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511583 4728 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511602 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511640 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511657 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511690 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511715 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511786 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511814 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511830 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511855 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511875 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511894 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511928 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511952 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.511977 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512005 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512031 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512056 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512081 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512105 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512123 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512140 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512174 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512192 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512209 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512231 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512248 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512270 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512288 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512305 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512335 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512352 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512503 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512544 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512564 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512602 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512621 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512645 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512674 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512693 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512726 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512780 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512807 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512827 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512853 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512875 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512892 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512915 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512934 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512959 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.512982 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513001 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513023 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513040 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513056 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513079 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513096 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513115 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513138 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513163 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513189 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513213 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513232 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513256 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513272 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513294 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513312 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513328 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513350 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513368 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513390 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513408 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513424 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513445 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513461 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513483 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513501 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513518 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" 
seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513539 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513555 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513578 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513594 4728 reconstruct.go:97] "Volume reconstruction finished"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.513605 4728 reconciler.go:26] "Reconciler: start to sync state"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.535327 4728 manager.go:324] Recovery completed
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.547325 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.549014 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.549040 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.549048 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.549805 4728 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.549826 4728 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.549842 4728 state_mem.go:36] "Initialized new in-memory state store"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.550197 4728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.552320 4728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.552383 4728 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.552430 4728 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.552508 4728 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 04 11:27:31 crc kubenswrapper[4728]: W0204 11:27:31.553078 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused
Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.553127 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.569935 4728 policy_none.go:49] "None policy: Start"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.571282 4728 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.571309 4728 state_mem.go:35] "Initializing new in-memory state store"
Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.593031 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.617604 4728 manager.go:334] "Starting Device Plugin manager"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.617677 4728 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.617689 4728 server.go:79] "Starting device plugin registration server"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.618156 4728 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.618174 4728 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.618424 4728 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.618732 4728 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.618771 4728 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.624418 4728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.652693 4728 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"]
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.652899 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.654497 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.654567 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.654593 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.654843 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.655381 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.655481 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.656594 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.656624 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.656635 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.656804 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.656926 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.656971 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.657297 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.657336 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.657349 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.658008 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.658036 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.658046 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.658125 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.658341 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.658380 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.658392 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.658509 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.658549 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.659000 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.659026 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.659034 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.659158 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.659288 4728 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.659326 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.660070 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.660095 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.660106 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.660131 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.660158 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.660173 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.660629 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.660663 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.660677 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.660871 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.660910 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.661626 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.661659 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.661675 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.694122 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="400ms" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715005 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715053 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715077 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715099 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715121 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715143 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715270 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715329 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715361 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715391 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715423 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715478 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715519 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715538 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.715553 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.719587 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.720983 4728 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.721020 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.721031 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.721054 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.721769 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817196 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817256 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817280 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817307 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817332 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817378 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817399 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817421 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817445 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817438 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817528 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817569 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817573 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817465 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817574 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817596 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817603 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 
11:27:31.817608 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817616 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817454 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817714 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817738 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817828 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817898 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817853 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.818017 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.818016 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.817980 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.818066 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.818114 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.922417 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.923815 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.923866 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.923879 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.923904 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 11:27:31 crc kubenswrapper[4728]: E0204 11:27:31.924380 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.978264 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:31 crc kubenswrapper[4728]: I0204 11:27:31.994198 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.014573 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.021239 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.025387 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 04 11:27:32 crc kubenswrapper[4728]: W0204 11:27:32.036638 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-004d7c5d342142441899ae47fd7596686ed29bea2348c9d020e214a72ee33a70 WatchSource:0}: Error finding container 004d7c5d342142441899ae47fd7596686ed29bea2348c9d020e214a72ee33a70: Status 404 returned error can't find the container with id 004d7c5d342142441899ae47fd7596686ed29bea2348c9d020e214a72ee33a70 Feb 04 11:27:32 crc kubenswrapper[4728]: W0204 11:27:32.039881 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-97bf7762b4a0ebca84edc0b7087b2f21eb9cf249ed38a859c681cea497d34ccd WatchSource:0}: Error finding container 97bf7762b4a0ebca84edc0b7087b2f21eb9cf249ed38a859c681cea497d34ccd: Status 404 returned error can't find the container with id 97bf7762b4a0ebca84edc0b7087b2f21eb9cf249ed38a859c681cea497d34ccd Feb 04 11:27:32 crc kubenswrapper[4728]: W0204 11:27:32.052340 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4e43e2299dcb3ff9c4bdd9ac682aafbd142afd9644e9f6c45451b34420d9b30c WatchSource:0}: Error finding container 4e43e2299dcb3ff9c4bdd9ac682aafbd142afd9644e9f6c45451b34420d9b30c: Status 404 returned error can't find the container with id 4e43e2299dcb3ff9c4bdd9ac682aafbd142afd9644e9f6c45451b34420d9b30c Feb 04 11:27:32 crc kubenswrapper[4728]: W0204 11:27:32.054640 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-74728a1b93ef2ce8aaac5c59acdde53ed23ccc5ee0c0cebc44b1ed7d5d00c293 WatchSource:0}: Error finding container 74728a1b93ef2ce8aaac5c59acdde53ed23ccc5ee0c0cebc44b1ed7d5d00c293: Status 404 returned error can't find the container with id 74728a1b93ef2ce8aaac5c59acdde53ed23ccc5ee0c0cebc44b1ed7d5d00c293 Feb 04 11:27:32 crc kubenswrapper[4728]: W0204 11:27:32.057125 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d10e41ea36bc38f2b98080b59b388e0ac3eeca1d23f23e77e0a9faf49798b18e WatchSource:0}: Error finding container d10e41ea36bc38f2b98080b59b388e0ac3eeca1d23f23e77e0a9faf49798b18e: Status 404 returned error can't find the container with id d10e41ea36bc38f2b98080b59b388e0ac3eeca1d23f23e77e0a9faf49798b18e Feb 04 11:27:32 crc kubenswrapper[4728]: E0204 11:27:32.095736 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="800ms" Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.324999 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.326403 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.326441 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.326450 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.326475 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 11:27:32 crc kubenswrapper[4728]: E0204 11:27:32.326966 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" Feb 04 11:27:32 crc kubenswrapper[4728]: W0204 11:27:32.400220 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Feb 04 11:27:32 crc kubenswrapper[4728]: E0204 11:27:32.400284 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.477305 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.493480 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:24:15.861024471 +0000 UTC Feb 04 11:27:32 crc kubenswrapper[4728]: W0204 11:27:32.550704 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Feb 04 11:27:32 crc kubenswrapper[4728]: E0204 11:27:32.551064 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.557274 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"97bf7762b4a0ebca84edc0b7087b2f21eb9cf249ed38a859c681cea497d34ccd"} Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.559947 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"004d7c5d342142441899ae47fd7596686ed29bea2348c9d020e214a72ee33a70"} Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.560506 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d10e41ea36bc38f2b98080b59b388e0ac3eeca1d23f23e77e0a9faf49798b18e"} Feb 04 11:27:32 crc kubenswrapper[4728]: W0204 11:27:32.561517 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Feb 04 11:27:32 crc kubenswrapper[4728]: E0204 11:27:32.561565 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.561635 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"74728a1b93ef2ce8aaac5c59acdde53ed23ccc5ee0c0cebc44b1ed7d5d00c293"} Feb 04 11:27:32 crc kubenswrapper[4728]: I0204 11:27:32.564117 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4e43e2299dcb3ff9c4bdd9ac682aafbd142afd9644e9f6c45451b34420d9b30c"} Feb 04 11:27:32 crc kubenswrapper[4728]: W0204 11:27:32.683026 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Feb 04 11:27:32 crc kubenswrapper[4728]: E0204 11:27:32.683126 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Feb 04 11:27:32 crc kubenswrapper[4728]: E0204 11:27:32.897220 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="1.6s" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.127127 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.128863 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.128894 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.128903 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.128923 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 11:27:33 crc kubenswrapper[4728]: E0204 11:27:33.129337 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.409370 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 04 11:27:33 crc kubenswrapper[4728]: E0204 11:27:33.410580 4728 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.477219 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.493603 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:55:20.689671275 +0000 UTC Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.569349 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec" exitCode=0 Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.569504 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec"} Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.569546 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.571155 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.571220 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.571237 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.572982 4728 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d16713485cc51cbaf2fcdfee53c58a94d76d1e87839108a14745a42efd0f46a3" exitCode=0 Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.573072 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d16713485cc51cbaf2fcdfee53c58a94d76d1e87839108a14745a42efd0f46a3"} Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.573149 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.573765 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.574278 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.574319 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.574335 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.574573 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.574595 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.574606 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.576227 4728 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8" exitCode=0 Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.576326 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8"} Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.576431 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.577799 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.577839 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.577853 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.579040 4728 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9b7daa3791c1798436182e3caa17966b437685ca2bf98993f9ff1a1d497fd3b0" exitCode=0 Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.579220 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.579238 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9b7daa3791c1798436182e3caa17966b437685ca2bf98993f9ff1a1d497fd3b0"} Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.580272 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.580308 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.580344 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.583189 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1"} Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.583264 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.583270 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5"} Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.583355 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297"} Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.583376 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594"} Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.584088 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.584139 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:33 crc kubenswrapper[4728]: I0204 11:27:33.584158 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:34 crc kubenswrapper[4728]: W0204 11:27:34.374145 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Feb 04 11:27:34 crc kubenswrapper[4728]: E0204 11:27:34.374227 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.477219 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.493745 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:56:09.07294374 +0000 UTC Feb 04 11:27:34 crc kubenswrapper[4728]: E0204 11:27:34.499612 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="3.2s" Feb 04 11:27:34 crc kubenswrapper[4728]: W0204 
11:27:34.554302 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Feb 04 11:27:34 crc kubenswrapper[4728]: E0204 11:27:34.554383 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.587933 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"94d799590335fcabe32dc02c41327fcbc320eeb2473371a6e0ece9fd6a072a65"} Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.587992 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.588837 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.588887 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.588903 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.590914 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"accafd01300b371d7bbbdc0bad973b259a3b9772022ddb99804c50e31a913a5e"} Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.590950 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770"} Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.590962 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34"} Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.590973 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180"} Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.590983 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f"} Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.591074 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.591856 4728 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.591883 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.591890 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.594001 4728 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="55f2e53d55216a6594516ef44a5e1a074366927648ce02cebd8557bd14573474" exitCode=0 Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.594053 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"55f2e53d55216a6594516ef44a5e1a074366927648ce02cebd8557bd14573474"} Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.594145 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:34 crc kubenswrapper[4728]: W0204 11:27:34.594911 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Feb 04 11:27:34 crc kubenswrapper[4728]: E0204 11:27:34.594973 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.594985 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.595010 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.595022 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.597584 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.597625 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.597586 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75"} Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.597666 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f"} Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.597687 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af"} Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.598469 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.598504 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.598516 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.598592 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.598619 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.598633 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.729916 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.736095 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.736129 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.736140 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:34 crc kubenswrapper[4728]: I0204 11:27:34.736163 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 11:27:34 crc kubenswrapper[4728]: E0204 11:27:34.736556 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.494102 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 12:23:56.732838038 +0000 UTC Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.602681 4728 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3bac0d562240b4b50ffe404e92451c62dbd5a554f04583953c3f9c2119091c9b" exitCode=0 Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.602857 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.602871 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.602916 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.602847 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3bac0d562240b4b50ffe404e92451c62dbd5a554f04583953c3f9c2119091c9b"} Feb 04 11:27:35 crc 
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.602983 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.603031 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604343 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604372 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604349 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604399 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604441 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604464 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604381 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604405 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604522 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604410 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604547 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:35 crc kubenswrapper[4728]: I0204 11:27:35.604531 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.494323 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 12:38:41.342882448 +0000 UTC
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.609068 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d361a6a307259cfffbb9b0db8434558eb2ed21c48ca68cf29b69d9bbfac0d410"}
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.609122 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b1442856ad417a9a8d16b8c8a48aa75dc9c0023bfa3563da0cd35698cd266e3"}
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.609140 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"628d6e455bcd6e8569b9b9fba591ff066202c5cfa8da3e0751cc132a08edb221"}
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.609157 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"10c7d9a731ed5d1c8cdc76f061d2d2d89b7a67a0f5bedb1b293c5292030940de"}
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.609173 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"087c4568dee348585cdb9a2cc6b2ca4ad29678673df7c9795d6330a1d780b492"}
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.609319 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.610198 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.610244 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.610257 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.859154 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.859347 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.859385 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.860574 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.860610 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:36 crc kubenswrapper[4728]: I0204 11:27:36.860622 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.261833 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.494483 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:08:30.916550477 +0000 UTC
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.581086 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.612313 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.612444 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.613799 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.614263 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.614281 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.613868 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.614451 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.614467 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.937034 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.938842 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.938878 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.938889 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:37 crc kubenswrapper[4728]: I0204 11:27:37.938914 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.290053 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.290420 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.292289 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.292339 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.292354 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.299563 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.419999 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.420370 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.421812 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.421873 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.421890 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.495248 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:43:57.368573464 +0000 UTC
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.614929 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.616339 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.616406 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.616430 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.621810 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.622044 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.623647 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.623710 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:38 crc kubenswrapper[4728]: I0204 11:27:38.623733 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:39 crc kubenswrapper[4728]: I0204 11:27:39.496346 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:23:00.045189629 +0000 UTC
Feb 04 11:27:39 crc kubenswrapper[4728]: I0204 11:27:39.516784 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 04 11:27:39 crc kubenswrapper[4728]: I0204 11:27:39.517070 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:39 crc kubenswrapper[4728]: I0204 11:27:39.519125 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:39 crc kubenswrapper[4728]: I0204 11:27:39.519184 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:39 crc kubenswrapper[4728]: I0204 11:27:39.519201 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:39 crc kubenswrapper[4728]: I0204 11:27:39.594019 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 04 11:27:39 crc kubenswrapper[4728]: I0204 11:27:39.616873 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:39 crc kubenswrapper[4728]: I0204 11:27:39.617788 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:39 crc kubenswrapper[4728]: I0204 11:27:39.617838 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:39 crc kubenswrapper[4728]: I0204 11:27:39.617862 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:40 crc kubenswrapper[4728]: I0204 11:27:40.497086 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:33:15.178327847 +0000 UTC
Feb 04 11:27:41 crc kubenswrapper[4728]: I0204 11:27:41.059391 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 04 11:27:41 crc kubenswrapper[4728]: I0204 11:27:41.059636 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:41 crc kubenswrapper[4728]: I0204 11:27:41.061145 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:41 crc kubenswrapper[4728]: I0204 11:27:41.061193 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:41 crc kubenswrapper[4728]: I0204 11:27:41.061207 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:41 crc kubenswrapper[4728]: I0204 11:27:41.497559 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:56:13.038407487 +0000 UTC
Feb 04 11:27:41 crc kubenswrapper[4728]: I0204 11:27:41.502792 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 04 11:27:41 crc kubenswrapper[4728]: I0204 11:27:41.503037 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:41 crc kubenswrapper[4728]: I0204 11:27:41.504552 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:41 crc kubenswrapper[4728]: I0204 11:27:41.504605 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:41 crc kubenswrapper[4728]: I0204 11:27:41.504623 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:41 crc kubenswrapper[4728]: E0204 11:27:41.624693 4728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 04 11:27:42 crc kubenswrapper[4728]: I0204 11:27:42.498595 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:04:57.458584719 +0000 UTC
Feb 04 11:27:43 crc kubenswrapper[4728]: I0204 11:27:43.097303 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 04 11:27:43 crc kubenswrapper[4728]: I0204 11:27:43.097479 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:43 crc kubenswrapper[4728]: I0204 11:27:43.098831 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:43 crc kubenswrapper[4728]: I0204 11:27:43.098909 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:43 crc kubenswrapper[4728]: I0204 11:27:43.098933 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:43 crc kubenswrapper[4728]: I0204 11:27:43.102252 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 04 11:27:43 crc kubenswrapper[4728]: I0204 11:27:43.499061 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:27:25.439040069 +0000 UTC
Feb 04 11:27:43 crc kubenswrapper[4728]: I0204 11:27:43.626412 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 04 11:27:43 crc kubenswrapper[4728]: I0204 11:27:43.627789 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:43 crc kubenswrapper[4728]: I0204 11:27:43.627834 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:43 crc kubenswrapper[4728]: I0204 11:27:43.627847 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:44 crc kubenswrapper[4728]: I0204 11:27:44.059698 4728 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 04 11:27:44 crc kubenswrapper[4728]: I0204 11:27:44.059806 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 04 11:27:44 crc kubenswrapper[4728]: I0204 11:27:44.499813 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:16:09.674568401 +0000 UTC
Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.357046 4728 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49466->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.357120 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49466->192.168.126.11:17697: read: connection reset by peer"
Feb 04 11:27:45 crc kubenswrapper[4728]: W0204 11:27:45.370214 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
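Each probe failure above is recorded twice: once by patch_prober.go with the "interesting pod" context and response body, and once by prober.go:107 as the canonical "Probe failed" record. Counting the canonical records per pod, container, and probe type is usually enough for triage; a sketch under the same kubelet.log assumption:

import re
from collections import Counter

# Count failed probes per (pod, container, probe type) from prober.go:107 lines.
pat = re.compile(r'"Probe failed" probeType="(\w+)" pod="([^"]+)".*?containerName="([^"]+)"')
fails = Counter()
with open("kubelet.log") as f:
    for line in f:
        m = pat.search(line)
        if m:
            ptype, pod, container = m.groups()
            fails[(pod, container, ptype)] += 1
for key, n in fails.most_common():
    print(n, *key)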
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.370316 4728 trace.go:236] Trace[740646229]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Feb-2026 11:27:35.369) (total time: 10000ms): Feb 04 11:27:45 crc kubenswrapper[4728]: Trace[740646229]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:27:45.370) Feb 04 11:27:45 crc kubenswrapper[4728]: Trace[740646229]: [10.000803122s] [10.000803122s] END Feb 04 11:27:45 crc kubenswrapper[4728]: E0204 11:27:45.370339 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.478185 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.500599 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:41:40.537091834 +0000 UTC Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.633707 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.636097 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="accafd01300b371d7bbbdc0bad973b259a3b9772022ddb99804c50e31a913a5e" exitCode=255 Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.636150 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"accafd01300b371d7bbbdc0bad973b259a3b9772022ddb99804c50e31a913a5e"} Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.636338 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.637263 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.637308 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.637327 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.637880 4728 scope.go:117] "RemoveContainer" containerID="accafd01300b371d7bbbdc0bad973b259a3b9772022ddb99804c50e31a913a5e" Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.746521 4728 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed 
with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.746596 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.766687 4728 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 04 11:27:45 crc kubenswrapper[4728]: I0204 11:27:45.766780 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 04 11:27:46 crc kubenswrapper[4728]: I0204 11:27:46.500731 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:37:17.834619801 +0000 UTC Feb 04 11:27:46 crc kubenswrapper[4728]: I0204 11:27:46.641745 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 04 11:27:46 crc kubenswrapper[4728]: I0204 11:27:46.643826 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14"} Feb 04 11:27:46 crc kubenswrapper[4728]: I0204 11:27:46.644008 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:46 crc kubenswrapper[4728]: I0204 11:27:46.644979 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:46 crc kubenswrapper[4728]: I0204 11:27:46.645019 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:46 crc kubenswrapper[4728]: I0204 11:27:46.645033 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:47 crc kubenswrapper[4728]: I0204 11:27:47.261872 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:47 crc kubenswrapper[4728]: I0204 11:27:47.501801 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:44:35.724115059 +0000 UTC Feb 04 11:27:47 crc kubenswrapper[4728]: I0204 11:27:47.646468 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
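The certificate_manager lines recompute a jittered rotation deadline on every sync, which is why the deadline keeps moving while the expiration stays fixed at 2026-02-24 05:53:03. A sketch that extracts both timestamps to confirm deadlines stay comfortably ahead of expiry (kubelet.log path assumed as before):

import re
from datetime import datetime

# Pull (expiration, rotation deadline) pairs from certificate_manager lines.
pat = re.compile(r'Certificate expiration is ([0-9-]+ [0-9:]+) \+0000 UTC, '
                 r'rotation deadline is ([0-9-]+ [0-9:.]+) \+0000 UTC')
with open("kubelet.log") as f:
    for line in f:
        m = pat.search(line)
        if m:
            exp = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
            dl = datetime.strptime(m.group(2).split(".")[0], "%Y-%m-%d %H:%M:%S")
            print(f"deadline {dl}  ({(exp - dl).days} days before expiry)")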
attach/detach" Feb 04 11:27:47 crc kubenswrapper[4728]: I0204 11:27:47.647592 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:47 crc kubenswrapper[4728]: I0204 11:27:47.647645 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:47 crc kubenswrapper[4728]: I0204 11:27:47.647659 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:48 crc kubenswrapper[4728]: I0204 11:27:48.502070 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:09:17.394816077 +0000 UTC Feb 04 11:27:48 crc kubenswrapper[4728]: I0204 11:27:48.629809 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:48 crc kubenswrapper[4728]: I0204 11:27:48.648227 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:48 crc kubenswrapper[4728]: I0204 11:27:48.649397 4728 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 04 11:27:48 crc kubenswrapper[4728]: I0204 11:27:48.649420 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:48 crc kubenswrapper[4728]: I0204 11:27:48.649459 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:48 crc kubenswrapper[4728]: I0204 11:27:48.649477 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:48 crc kubenswrapper[4728]: I0204 11:27:48.653443 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.502384 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:06:18.61871228 +0000 UTC Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.542716 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.542898 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.544110 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.544191 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.544217 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.556539 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.650048 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.650090 4728 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.651374 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.651412 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.651418 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.651424 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.651450 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:49 crc kubenswrapper[4728]: I0204 11:27:49.651628 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:50 crc kubenswrapper[4728]: I0204 11:27:50.503435 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:26:08.046818956 +0000 UTC Feb 04 11:27:50 crc kubenswrapper[4728]: E0204 11:27:50.741311 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 04 11:27:50 crc kubenswrapper[4728]: I0204 11:27:50.744998 4728 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 04 11:27:50 crc kubenswrapper[4728]: I0204 11:27:50.745169 4728 trace.go:236] Trace[1270285713]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Feb-2026 11:27:39.140) (total time: 11604ms): Feb 04 11:27:50 crc kubenswrapper[4728]: Trace[1270285713]: ---"Objects listed" error: 11604ms (11:27:50.745) Feb 04 11:27:50 crc kubenswrapper[4728]: Trace[1270285713]: [11.604120595s] [11.604120595s] END Feb 04 11:27:50 crc kubenswrapper[4728]: I0204 11:27:50.745216 4728 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 04 11:27:50 crc kubenswrapper[4728]: E0204 11:27:50.748500 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 04 11:27:50 crc kubenswrapper[4728]: I0204 11:27:50.748887 4728 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 04 11:27:50 crc kubenswrapper[4728]: I0204 11:27:50.753671 4728 trace.go:236] Trace[1464434569]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Feb-2026 11:27:38.924) (total time: 11828ms): Feb 04 11:27:50 crc kubenswrapper[4728]: Trace[1464434569]: ---"Objects listed" error: 11828ms (11:27:50.753) Feb 04 11:27:50 crc kubenswrapper[4728]: Trace[1464434569]: [11.82870104s] [11.82870104s] END Feb 04 11:27:50 crc kubenswrapper[4728]: I0204 11:27:50.753732 4728 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 04 11:27:50 crc kubenswrapper[4728]: I0204 11:27:50.754445 4728 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 
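The trace.go blocks above report reflector ListAndWatch latency directly in the "(total time: ...)" header; the 10-11s figures here line up with the TLS handshake timeouts while the apiserver was coming back up. A sketch that surfaces traces over a threshold, same illustrative kubelet.log path:

import re

# Report reflector ListAndWatch traces slower than 5 seconds.
pat = re.compile(r'Trace\[(\d+)\]: "Reflector ListAndWatch".*\(total time: (\d+)ms\)')
with open("kubelet.log") as f:
    for line in f:
        m = pat.search(line)
        if m and int(m.group(2)) > 5000:
            print(f"trace {m.group(1)}: ListAndWatch took {m.group(2)}ms")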
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.101945 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.105207 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.473980 4728 apiserver.go:52] "Watching apiserver"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.477499 4728 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.478698 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.479468 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.479559 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.479842 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.479860 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.480033 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.480106 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.480113 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.480119 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.483729 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.486998 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.487365 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.487447 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.487524 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.487662 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.487663 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.487694 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.487847 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.489981 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.493075 4728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.505696 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:39:47.639228914 +0000 UTC
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.511476 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
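The status_manager failure above embeds the attempted status patch as an escaped JSON string inside err=..., with the root cause at the very end (the node-identity webhook at 127.0.0.1:9743 refusing connections). A sketch that recovers the patch for inspection; the two unicode-escape passes reflect the two quoting layers seen in this log and are an assumption of this sketch, not a documented format:

import json, re

# Recover the escaped JSON patch embedded in "Failed to update status for pod"
# lines and pretty-print the first part of it.
pat = re.compile(r'failed to patch status \\"(\{.*\})\\" for pod')
with open("kubelet.log") as f:
    for line in f:
        m = pat.search(line)
        if m:
            raw = m.group(1).encode().decode("unicode_escape")
            raw = raw.encode().decode("unicode_escape")  # second quoting layer
            print(json.dumps(json.loads(raw), indent=2)[:400])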
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.526894 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.540222 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554271 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554346 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554388 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554426 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554472 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554504 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554575 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554608 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
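Each reconciler "UnmountVolume started" record here should eventually be matched by an operation_generator "UnmountVolume.TearDown succeeded" record for the same UniqueName, as the interleaved entries below show. A sketch that reports unmounts with no completion record, under the same kubelet.log assumption:

import re

# Pair "UnmountVolume started" UniqueNames with "TearDown succeeded" volumes
# to spot unmounts that never complete in the captured window.
started = re.compile(r'UnmountVolume started for volume \\"([^"\\]+)\\" \(UniqueName: \\"([^"\\]+)\\"\)')
done = re.compile(r'UnmountVolume\.TearDown succeeded for volume "([^"]+)"')
pending, finished = set(), set()
with open("kubelet.log") as f:
    for line in f:
        m = started.search(line)
        if m:
            pending.add(m.group(2))
        m = done.search(line)
        if m:
            finished.add(m.group(1))
for unique in sorted(pending - finished):
    print("no TearDown seen for", unique)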
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554640 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554670 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554740 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554778 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554747 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554786 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554865 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554896 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554920 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554948 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554975 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555004 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.554996 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555027 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555051 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555073 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555094 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555121 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555144 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555174 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555199 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555221 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555243 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555265 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555289 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555284 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555310 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555333 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555367 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555381 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555407 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555432 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555453 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555473 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555474 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555492 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555487 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555564 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555629 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555657 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555689 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555713 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555737 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555779 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555807 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555860 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555882 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555905 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555926 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555948 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555972 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555998 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556044 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556060 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556075 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556092 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556108 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556124 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556146 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556164 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556181 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556198 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556215 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556232 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556248 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556263 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556281 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556299 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556316 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556333 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556348 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556365 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556380 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556408 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556443 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556459 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556478 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556494 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556513 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556527 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556543 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556558 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556575 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556591 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556606 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556622 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556645 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557121 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557148 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557171 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557199 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557225 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557247 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557272 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557296 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557317 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557333 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557351 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557381 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557400 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557416 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557435 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557453 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557471 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557490 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557517 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557533 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557564 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557579 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557596 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557612 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557628 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557644 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557660 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557683 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557698 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557714 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557729 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557762 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557781 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557797 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557813 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557831 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557852 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557868 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557883 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557899 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557914 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557930 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557946 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557966 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557982 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.557998 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558016 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558032 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558050 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558068 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558084 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558103 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558118 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558136 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558154 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558172 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558188 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558207 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 
11:27:51.558224 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558240 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558256 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558272 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558287 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558303 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558321 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558337 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558354 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558370 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558389 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558406 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558424 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558441 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558456 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558474 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558492 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558510 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558527 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558543 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558559 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558578 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558597 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558615 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558634 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558652 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558669 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558684 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558702 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558719 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558735 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558769 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558785 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558802 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558819 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558835 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558853 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558870 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558889 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558905 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558921 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558937 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558953 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558969 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.558985 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559050 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559068 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559086 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559103 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559121 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559158 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559179 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559199 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559219 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559247 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559266 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559284 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559302 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559323 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559352 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559370 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559388 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559406 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559425 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559494 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559509 4728 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559519 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559529 4728 reconciler_common.go:293] 
"Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559541 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559566 4728 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559583 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.559598 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.562030 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555583 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.576490 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555606 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555748 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555888 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555923 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.555943 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556048 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556091 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556150 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556209 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556255 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556352 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556534 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556658 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556723 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556841 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.556940 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.560965 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.561140 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.561446 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.561499 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.561633 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.561808 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.562096 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.562337 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.562550 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.562602 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.562710 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.562818 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.562862 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.562905 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.563005 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.563479 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.563794 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.563913 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.564048 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.564366 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.564482 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.564688 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.564825 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.564995 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.565311 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.565990 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.566029 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.569374 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.569700 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.569997 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.570200 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.570436 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.576903 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.570689 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.571061 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.571199 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.571214 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.571235 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.571429 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.571435 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.572092 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.572195 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.572172 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.572232 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.572539 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.572689 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.573012 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.573156 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.573221 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.573887 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.574113 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.574521 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.574734 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.575418 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.575483 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.575531 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.575624 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.576085 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.577168 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.576108 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.576165 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.576451 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.576582 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.576855 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.577227 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.577300 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.577595 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.577833 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.577921 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.580172 4728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.592048 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.592414 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.592471 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.598246 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.598660 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.599443 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.599634 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.599964 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.600011 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.600017 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.600007 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.600100 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.600219 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.600262 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.601110 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.601437 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.601675 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.601709 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.602147 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.602163 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.602417 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.602337 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.602499 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.603093 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.603193 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.603204 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.603457 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.603977 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.604644 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.604967 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605006 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605022 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.605095 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:27:52.105069969 +0000 UTC m=+21.247774354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605294 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605376 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605464 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605685 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605676 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.570945 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605839 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605876 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605903 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605946 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.606201 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.606307 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.606308 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.606338 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.606572 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). 
InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.606844 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.606879 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.607072 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.607087 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.607536 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.608010 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.608507 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.608613 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.609001 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.609080 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.609139 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.609166 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.609404 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.610263 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.610653 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.611228 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.611460 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.611627 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.611937 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.611952 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.612032 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.605319 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.612505 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.612804 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.612995 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.613058 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.613254 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.613288 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.613409 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:52.113374001 +0000 UTC m=+21.256078396 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.613765 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:52.113680818 +0000 UTC m=+21.256385203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.614492 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.615400 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.615851 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.615979 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.618339 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.621041 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.621641 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.622097 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.624233 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.625164 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.626416 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.632199 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.636668 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.638239 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.638826 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.638993 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.642918 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.643141 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.644377 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.644078 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.640883 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.645032 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.645341 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.647834 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.648390 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.648636 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.649326 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.649494 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.651094 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.652108 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661228 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661283 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661383 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661395 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661404 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661412 4728 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661421 4728 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661430 4728 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661439 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661447 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661456 4728 reconciler_common.go:293] "Volume 
detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661464 4728 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661472 4728 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661481 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661489 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661503 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661512 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661521 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661529 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661538 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661555 4728 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661571 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661583 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661594 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661602 4728 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661611 4728 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661621 4728 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661629 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661638 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661647 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661656 4728 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661665 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661673 4728 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661681 4728 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661689 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661697 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661705 4728 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661716 4728 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661727 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661738 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661770 4728 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661786 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661797 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661808 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661821 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661832 4728 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661843 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661855 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661866 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661878 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661889 4728 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661899 4728 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661910 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661921 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661933 4728 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661943 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661954 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661965 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661976 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661986 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.661997 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662008 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662019 4728 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662028 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662036 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662043 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662051 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662063 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662071 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662079 4728 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662088 4728 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662097 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662107 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662116 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662124 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662132 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662140 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662148 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662201 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662246 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662277 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662649 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662668 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662683 4728 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662699 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662713 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662728 4728 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.662742 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665354 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665376 4728 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665405 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665421 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665436 4728 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665451 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665465 4728 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665479 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665493 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665507 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665520 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665535 4728 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665549 4728 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665564 4728 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665581 4728 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665596 4728 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665611 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665626 4728 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665639 4728 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665653 4728 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665667 4728 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665680 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665694 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665708 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665722 4728 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 
11:27:51.665735 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665771 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665785 4728 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665798 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665811 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665829 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665919 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665951 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665964 4728 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665973 4728 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665983 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.665994 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666003 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666083 4728 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666104 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666118 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666143 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666157 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666166 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666174 4728 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666183 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666192 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666200 4728 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666210 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666351 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666382 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" 
DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666392 4728 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666401 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666410 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666422 4728 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666435 4728 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666448 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666460 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666476 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666490 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666503 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666637 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666668 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666678 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666690 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666700 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666709 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666703 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666718 4728 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666777 4728 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666794 4728 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666809 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666823 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666836 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666850 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666863 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" 
DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666877 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666892 4728 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666905 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666917 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666930 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666944 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666956 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666970 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666983 4728 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.666996 4728 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667009 4728 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667022 4728 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667035 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node 
\"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667047 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667059 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667071 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667083 4728 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667096 4728 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667107 4728 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667119 4728 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667130 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667142 4728 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667152 4728 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.667164 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.667639 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.667656 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 
11:27:51.667669 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.667727 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:52.16770811 +0000 UTC m=+21.310412555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.673513 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.673731 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.673788 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.673802 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.673861 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:52.173842455 +0000 UTC m=+21.316546840 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.675421 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d1
7ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.682566 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: E0204 11:27:51.683713 4728 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.686178 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.687394 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.690437 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.700886 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.711429 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.721443 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.728329 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.735843 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.744874 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.756446 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.767558 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.767741 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.767785 4728 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.767797 4728 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.767809 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.797658 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.802847 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 04 11:27:51 crc kubenswrapper[4728]: W0204 11:27:51.809810 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-76e5c4171dcfef825a50a8b90a990c2ea9b0481728d5c31ab902d93149e13be6 WatchSource:0}: Error finding container 76e5c4171dcfef825a50a8b90a990c2ea9b0481728d5c31ab902d93149e13be6: Status 404 returned error can't find the container with id 76e5c4171dcfef825a50a8b90a990c2ea9b0481728d5c31ab902d93149e13be6 Feb 04 11:27:51 crc kubenswrapper[4728]: W0204 11:27:51.814235 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-7bbd08b71753448bfeea5c9c52605b3010e0e03de6bb2570acf4a13e9b25e0f3 WatchSource:0}: Error finding container 7bbd08b71753448bfeea5c9c52605b3010e0e03de6bb2570acf4a13e9b25e0f3: Status 404 returned error can't find the container with id 7bbd08b71753448bfeea5c9c52605b3010e0e03de6bb2570acf4a13e9b25e0f3 Feb 04 11:27:51 crc kubenswrapper[4728]: I0204 11:27:51.821684 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 04 11:27:51 crc kubenswrapper[4728]: W0204 11:27:51.839827 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-d4f26c919efa4a1577e5b466bbb26fbbdfdd33a4b148f36b0dba421417e3184b WatchSource:0}: Error finding container d4f26c919efa4a1577e5b466bbb26fbbdfdd33a4b148f36b0dba421417e3184b: Status 404 returned error can't find the container with id d4f26c919efa4a1577e5b466bbb26fbbdfdd33a4b148f36b0dba421417e3184b Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.169782 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.169865 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.169902 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.169944 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:27:53.169910761 +0000 UTC m=+22.312615196 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.169957 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.169994 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:53.169987033 +0000 UTC m=+22.312691418 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.169990 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.170122 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.170159 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.170174 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.170195 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.170235 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:53.170215368 +0000 UTC m=+22.312919763 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.170259 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:53.170244238 +0000 UTC m=+22.312948663 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.271502 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.271744 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.271815 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.271864 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.271952 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:53.271926904 +0000 UTC m=+22.414631319 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.506651 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:12:53.528341515 +0000 UTC Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.669989 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd"} Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.670054 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a"} Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.670064 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7bbd08b71753448bfeea5c9c52605b3010e0e03de6bb2570acf4a13e9b25e0f3"} Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.672240 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7"} Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.672341 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"76e5c4171dcfef825a50a8b90a990c2ea9b0481728d5c31ab902d93149e13be6"} Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.674367 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.674897 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.677069 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14" exitCode=255 Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.677173 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14"} Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.677294 4728 scope.go:117] "RemoveContainer" 
containerID="accafd01300b371d7bbbdc0bad973b259a3b9772022ddb99804c50e31a913a5e" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.678873 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d4f26c919efa4a1577e5b466bbb26fbbdfdd33a4b148f36b0dba421417e3184b"} Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.692903 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.694457 4728 scope.go:117] "RemoveContainer" containerID="670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14" Feb 04 11:27:52 crc kubenswrapper[4728]: E0204 11:27:52.695038 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.698710 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.715665 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.730278 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.747496 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.764219 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z"
Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.782801 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z"
Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.796353 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z"
Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.812119 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://accafd01300b371d7bbbdc0bad973b259a3b9772022ddb99804c50e31a913a5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:45Z\\\",\\\"message\\\":\\\"W0204 11:27:34.592203 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0204 11:27:34.593522 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770204454 cert, and key in /tmp/serving-cert-3485456890/serving-signer.crt, /tmp/serving-cert-3485456890/serving-signer.key\\\\nI0204 11:27:35.014141 1 observer_polling.go:159] Starting file observer\\\\nW0204 11:27:35.016821 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0204 11:27:35.017002 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:35.020400 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3485456890/tls.crt::/tmp/serving-cert-3485456890/tls.key\\\\\\\"\\\\nF0204 11:27:45.350166 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.844252 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.862567 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.881262 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.897071 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.912935 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.924672 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
Feb 04 11:27:52 crc kubenswrapper[4728]: I0204 11:27:52.924672 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:52Z is after 2025-08-24T17:21:41Z"
Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.179810 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.179886 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.179910 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.179938 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.180015 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:27:55.179985607 +0000 UTC m=+24.322689992 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.180035 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.180063 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.180050 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.180156 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.180173 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.180133 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:55.18011192 +0000 UTC m=+24.322816315 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.180242 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:55.180220672 +0000 UTC m=+24.322925127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.180274 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:55.180262443 +0000 UTC m=+24.322966938 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.280442 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.280605 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.280625 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.280639 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.280733 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:55.280710132 +0000 UTC m=+24.423414537 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.507505 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:05:11.816255443 +0000 UTC
Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.553055 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.553055 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.553244 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.553279 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 11:27:53.553386 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.557331 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.558443 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.559659 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.560964 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.562134 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.563074 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.564225 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.565215 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.566292 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.567207 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.568893 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.570389 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.571206 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.571996 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.572727 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.573599 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.574706 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.575446 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.576509 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.577608 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.578495 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.579529 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.582058 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.583218 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.584679 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.585781 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.587610 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.588383 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.589344 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.589787 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.590243 4728 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.590685 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.592269 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.592720 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.593527 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.594959 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.595575 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.596558 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.597327 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.598612 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.599184 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.600427 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.601316 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.602375 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.602918 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.603888 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.604384 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.605608 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.606153 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.607142 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.607594 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.608519 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.609114 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.609555 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.683074 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.685917 4728 scope.go:117] "RemoveContainer" containerID="670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14" Feb 04 11:27:53 crc kubenswrapper[4728]: E0204 
11:27:53.686206 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.699428 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:53Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.713199 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:53Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.727987 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:53Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.742922 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:53Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.763577 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:53Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.777118 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:53Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.790465 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:53Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:53 crc kubenswrapper[4728]: I0204 11:27:53.802188 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:53Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.230974 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.507661 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:07:27.200022981 +0000 UTC Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.694301 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf"} Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.695171 4728 scope.go:117] "RemoveContainer" containerID="670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14" Feb 04 11:27:54 crc kubenswrapper[4728]: E0204 11:27:54.695499 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.712835 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:54Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.731308 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:54Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.748630 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:54Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.767577 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:54Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.792018 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:54Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.805588 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:54Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.819283 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:54Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:54 crc kubenswrapper[4728]: I0204 11:27:54.833378 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:54Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:55 crc kubenswrapper[4728]: I0204 11:27:55.195044 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:27:55 crc kubenswrapper[4728]: I0204 11:27:55.195172 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:27:55 crc kubenswrapper[4728]: I0204 11:27:55.195216 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.195231 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:27:59.195208603 +0000 UTC m=+28.337912998 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:27:55 crc kubenswrapper[4728]: I0204 11:27:55.195270 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.195290 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.195370 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.195377 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:59.195358796 +0000 UTC m=+28.338063181 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.195481 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:59.195457628 +0000 UTC m=+28.338162073 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.195375 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.195519 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.195542 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.195599 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 11:27:59.195581511 +0000 UTC m=+28.338286036 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:55 crc kubenswrapper[4728]: I0204 11:27:55.296488 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.296647 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.296665 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.296675 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.296727 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" 
failed. No retries permitted until 2026-02-04 11:27:59.296712424 +0000 UTC m=+28.439416809 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:55 crc kubenswrapper[4728]: I0204 11:27:55.508465 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:05:38.018610798 +0000 UTC Feb 04 11:27:55 crc kubenswrapper[4728]: I0204 11:27:55.553072 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:27:55 crc kubenswrapper[4728]: I0204 11:27:55.553112 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:27:55 crc kubenswrapper[4728]: I0204 11:27:55.553072 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.553230 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.553324 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:27:55 crc kubenswrapper[4728]: E0204 11:27:55.553443 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:27:56 crc kubenswrapper[4728]: I0204 11:27:56.509044 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:45:21.663267563 +0000 UTC Feb 04 11:27:56 crc kubenswrapper[4728]: I0204 11:27:56.991700 4728 csr.go:261] certificate signing request csr-ztdp4 is approved, waiting to be issued Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.004317 4728 csr.go:257] certificate signing request csr-ztdp4 is issued Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.148665 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.150045 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.150072 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.150081 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.150131 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.156393 4728 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.156655 4728 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.157637 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.157678 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.157691 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.157705 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.157714 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:27:57 crc kubenswrapper[4728]: E0204 11:27:57.177176 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.180042 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.180077 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.180087 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.180105 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.180118 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:57 crc kubenswrapper[4728]: E0204 11:27:57.189917 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.192633 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.192671 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.192683 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.192698 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.192710 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:57 crc kubenswrapper[4728]: E0204 11:27:57.205212 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.208624 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.208708 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.208727 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.208767 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.208791 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:57 crc kubenswrapper[4728]: E0204 11:27:57.223581 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.226645 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.226667 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.226674 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.226687 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.226696 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:57 crc kubenswrapper[4728]: E0204 11:27:57.239832 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: E0204 11:27:57.239940 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.241486 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.241510 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.241517 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.241547 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.241558 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.343620 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.343657 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.343668 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.343683 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.343695 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.445579 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.445619 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.445628 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.445640 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.445648 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.509321 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 00:20:48.612725758 +0000 UTC
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.548125 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.548163 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.548172 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.548186 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.548195 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.553425 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.553482 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:27:57 crc kubenswrapper[4728]: E0204 11:27:57.553539 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.553488 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:27:57 crc kubenswrapper[4728]: E0204 11:27:57.553651 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:27:57 crc kubenswrapper[4728]: E0204 11:27:57.553712 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.651329 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.651628 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.651641 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.651655 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.651666 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.746693 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dc6rd"] Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.746983 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gcj4t"] Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.747209 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.747487 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-grzvj"] Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.747977 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.748002 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.749645 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.749641 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tlf2v"] Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.749804 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.749969 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tlf2v" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.749985 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.749993 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.750108 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.750171 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.750266 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.750484 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.750575 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.750636 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.751381 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.753440 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.753465 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.753474 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.753488 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.753497 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.754425 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.754711 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.754827 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.755181 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.765088 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.778005 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.789910 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.805296 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.817001 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823265 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-cni-dir\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823302 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-socket-dir-parent\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823324 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-var-lib-cni-multus\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823345 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c8409df-def9-46a0-a813-6788ddf1e292-rootfs\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823365 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d86n\" (UniqueName: \"kubernetes.io/projected/3c8409df-def9-46a0-a813-6788ddf1e292-kube-api-access-5d86n\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823385 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c8409df-def9-46a0-a813-6788ddf1e292-mcd-auth-proxy-config\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823419 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-run-k8s-cni-cncf-io\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823439 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-os-release\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823460 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-run-multus-certs\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823480 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn9nx\" (UniqueName: \"kubernetes.io/projected/b010d460-72d6-4943-9230-8750e91ef21c-kube-api-access-vn9nx\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823496 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-var-lib-kubelet\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823512 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-daemon-config\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823538 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-system-cni-dir\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823556 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-run-netns\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823582 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-cnibin\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823599 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-os-release\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823621 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-var-lib-cni-bin\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823677 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b010d460-72d6-4943-9230-8750e91ef21c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823769 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-hostroot\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823799 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b010d460-72d6-4943-9230-8750e91ef21c-cni-binary-copy\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823827 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823869 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-cnibin\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823892 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n64p9\" (UniqueName: \"kubernetes.io/projected/3dbc56be-abfc-4180-870e-f4c19bd09f4b-kube-api-access-n64p9\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823910 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-system-cni-dir\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823925 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-conf-dir\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823950 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c8409df-def9-46a0-a813-6788ddf1e292-proxy-tls\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.823967 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3dbc56be-abfc-4180-870e-f4c19bd09f4b-cni-binary-copy\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.824005 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-etc-kubernetes\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd"
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.829589 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.840316 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.853583 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.855830 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.855869 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.855880 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.855895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.855904 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.875351 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.891334 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.917328 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925163 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-cnibin\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925229 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-os-release\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925262 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67gmj\" (UniqueName: \"kubernetes.io/projected/83fdeccf-dd9f-4c93-bece-3382f3f4898f-kube-api-access-67gmj\") pod \"node-resolver-tlf2v\" (UID: \"83fdeccf-dd9f-4c93-bece-3382f3f4898f\") " pod="openshift-dns/node-resolver-tlf2v" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925289 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-var-lib-cni-bin\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925293 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-cnibin\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925316 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b010d460-72d6-4943-9230-8750e91ef21c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925341 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-cnibin\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925362 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-var-lib-cni-bin\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925366 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-hostroot\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" 
Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925391 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b010d460-72d6-4943-9230-8750e91ef21c-cni-binary-copy\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925417 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925447 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n64p9\" (UniqueName: \"kubernetes.io/projected/3dbc56be-abfc-4180-870e-f4c19bd09f4b-kube-api-access-n64p9\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925441 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-hostroot\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925421 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-cnibin\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925514 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-system-cni-dir\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925468 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-os-release\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925473 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-system-cni-dir\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925675 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c8409df-def9-46a0-a813-6788ddf1e292-proxy-tls\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925722 
4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3dbc56be-abfc-4180-870e-f4c19bd09f4b-cni-binary-copy\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925741 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-conf-dir\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925803 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-etc-kubernetes\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925832 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-cni-dir\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925861 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-socket-dir-parent\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925891 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-var-lib-cni-multus\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925915 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c8409df-def9-46a0-a813-6788ddf1e292-rootfs\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925939 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d86n\" (UniqueName: \"kubernetes.io/projected/3c8409df-def9-46a0-a813-6788ddf1e292-kube-api-access-5d86n\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.925961 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c8409df-def9-46a0-a813-6788ddf1e292-mcd-auth-proxy-config\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926005 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-run-k8s-cni-cncf-io\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926032 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/83fdeccf-dd9f-4c93-bece-3382f3f4898f-hosts-file\") pod \"node-resolver-tlf2v\" (UID: \"83fdeccf-dd9f-4c93-bece-3382f3f4898f\") " pod="openshift-dns/node-resolver-tlf2v" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926050 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-os-release\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926074 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-run-multus-certs\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926080 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b010d460-72d6-4943-9230-8750e91ef21c-cni-binary-copy\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926095 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn9nx\" (UniqueName: \"kubernetes.io/projected/b010d460-72d6-4943-9230-8750e91ef21c-kube-api-access-vn9nx\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926132 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c8409df-def9-46a0-a813-6788ddf1e292-rootfs\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926135 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-system-cni-dir\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926159 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-run-netns\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926164 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b010d460-72d6-4943-9230-8750e91ef21c-tuning-conf-dir\") 
pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926180 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-var-lib-kubelet\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926201 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-daemon-config\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926332 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-cni-dir\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926366 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-conf-dir\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926393 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-etc-kubernetes\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926429 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-socket-dir-parent\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926475 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-os-release\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926589 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3dbc56be-abfc-4180-870e-f4c19bd09f4b-cni-binary-copy\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926634 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-var-lib-cni-multus\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926692 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-system-cni-dir\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926733 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-run-multus-certs\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926778 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-run-netns\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926805 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-var-lib-kubelet\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.926810 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3dbc56be-abfc-4180-870e-f4c19bd09f4b-host-run-k8s-cni-cncf-io\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.927078 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3dbc56be-abfc-4180-870e-f4c19bd09f4b-multus-daemon-config\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.927332 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c8409df-def9-46a0-a813-6788ddf1e292-mcd-auth-proxy-config\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.927861 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b010d460-72d6-4943-9230-8750e91ef21c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.931381 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c8409df-def9-46a0-a813-6788ddf1e292-proxy-tls\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.948338 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.949937 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d86n\" (UniqueName: \"kubernetes.io/projected/3c8409df-def9-46a0-a813-6788ddf1e292-kube-api-access-5d86n\") pod \"machine-config-daemon-grzvj\" (UID: \"3c8409df-def9-46a0-a813-6788ddf1e292\") " pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.952322 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn9nx\" (UniqueName: \"kubernetes.io/projected/b010d460-72d6-4943-9230-8750e91ef21c-kube-api-access-vn9nx\") pod \"multus-additional-cni-plugins-gcj4t\" (UID: \"b010d460-72d6-4943-9230-8750e91ef21c\") " pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.954620 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n64p9\" (UniqueName: \"kubernetes.io/projected/3dbc56be-abfc-4180-870e-f4c19bd09f4b-kube-api-access-n64p9\") pod \"multus-dc6rd\" (UID: \"3dbc56be-abfc-4180-870e-f4c19bd09f4b\") " pod="openshift-multus/multus-dc6rd" Feb 04 11:27:57 crc kubenswrapper[4728]: 
I0204 11:27:57.957981 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.958017 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.958027 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.958043 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.958054 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:57Z","lastTransitionTime":"2026-02-04T11:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:57 crc kubenswrapper[4728]: I0204 11:27:57.993041 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:57Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.006069 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-04 11:22:56 +0000 UTC, rotation deadline is 2026-12-20 07:26:50.248452537 +0000 UTC Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.006134 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7651h58m52.242321025s for next certificate rotation Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.009037 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.022801 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.027476 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/83fdeccf-dd9f-4c93-bece-3382f3f4898f-hosts-file\") pod \"node-resolver-tlf2v\" (UID: \"83fdeccf-dd9f-4c93-bece-3382f3f4898f\") " pod="openshift-dns/node-resolver-tlf2v" Feb 04 11:27:58 crc 
kubenswrapper[4728]: I0204 11:27:58.027649 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/83fdeccf-dd9f-4c93-bece-3382f3f4898f-hosts-file\") pod \"node-resolver-tlf2v\" (UID: \"83fdeccf-dd9f-4c93-bece-3382f3f4898f\") " pod="openshift-dns/node-resolver-tlf2v" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.027831 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67gmj\" (UniqueName: \"kubernetes.io/projected/83fdeccf-dd9f-4c93-bece-3382f3f4898f-kube-api-access-67gmj\") pod \"node-resolver-tlf2v\" (UID: \"83fdeccf-dd9f-4c93-bece-3382f3f4898f\") " pod="openshift-dns/node-resolver-tlf2v" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.033651 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.045198 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67gmj\" (UniqueName: \"kubernetes.io/projected/83fdeccf-dd9f-4c93-bece-3382f3f4898f-kube-api-access-67gmj\") pod \"node-resolver-tlf2v\" (UID: \"83fdeccf-dd9f-4c93-bece-3382f3f4898f\") " pod="openshift-dns/node-resolver-tlf2v" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.045218 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.055303 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.060206 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dc6rd" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.060226 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.060256 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.060266 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.060281 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.060292 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:58Z","lastTransitionTime":"2026-02-04T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.068072 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.073126 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.074127 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:27:58 crc kubenswrapper[4728]: W0204 11:27:58.076960 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dbc56be_abfc_4180_870e_f4c19bd09f4b.slice/crio-6825783d139b820f46f485909b75f79a229c636760155a366a42c734f560d91e WatchSource:0}: Error finding container 6825783d139b820f46f485909b75f79a229c636760155a366a42c734f560d91e: Status 404 returned error can't find the container with id 6825783d139b820f46f485909b75f79a229c636760155a366a42c734f560d91e Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.083082 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tlf2v" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.095042 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.111318 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.118632 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c6r5d"] Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.120134 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.121735 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.121945 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.122626 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.123016 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.123230 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.123374 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.127176 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.134882 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.150342 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.170406 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.170434 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.170442 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.170454 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.170463 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:58Z","lastTransitionTime":"2026-02-04T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.171093 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: 
I0204 11:27:58.183192 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.194411 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.207945 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.221669 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.230742 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e963298-5c99-4db8-bdba-88187d4b0018-ovn-node-metrics-cert\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.230822 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-bin\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.230851 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-node-log\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.230875 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-openvswitch\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.230897 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-ovn-kubernetes\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.230918 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-kubelet\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.230939 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-var-lib-openvswitch\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.230958 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-log-socket\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.230977 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-netd\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.230999 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-config\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.231030 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-netns\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.231054 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp28q\" (UniqueName: \"kubernetes.io/projected/0e963298-5c99-4db8-bdba-88187d4b0018-kube-api-access-tp28q\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.231078 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-script-lib\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.231100 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-env-overrides\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.231122 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-ovn\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.231145 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-slash\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.231165 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-etc-openvswitch\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc 
kubenswrapper[4728]: I0204 11:27:58.231198 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-systemd-units\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.231221 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-systemd\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.231267 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.237547 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.250764 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.264446 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.276717 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.276824 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.276870 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.276895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.276909 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:58Z","lastTransitionTime":"2026-02-04T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.283956 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.297509 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.306297 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is
after 2025-08-24T17:21:41Z"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331641 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-openvswitch\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331676 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-ovn-kubernetes\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331697 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-kubelet\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331714 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-var-lib-openvswitch\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331717 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-openvswitch\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331763 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-ovn-kubernetes\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331766 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-log-socket\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331787 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-var-lib-openvswitch\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331729 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-log-socket\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331827 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-netd\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331848 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-config\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331837 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-kubelet\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331881 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp28q\" (UniqueName: \"kubernetes.io/projected/0e963298-5c99-4db8-bdba-88187d4b0018-kube-api-access-tp28q\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331904 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-netns\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331928 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-script-lib\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331926 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-netd\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331991 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-netns\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.331952 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-env-overrides\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332033 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-ovn\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332050 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-slash\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332066 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-etc-openvswitch\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332087 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-systemd\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332104 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332134 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-systemd-units\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332165 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e963298-5c99-4db8-bdba-88187d4b0018-ovn-node-metrics-cert\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332187 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-bin\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332196 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-etc-openvswitch\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332219 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-node-log\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332242 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-ovn\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332275 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-systemd\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332278 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-node-log\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332300 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-slash\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332309 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332335 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-bin\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332358 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-systemd-units\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332608 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-config\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332621 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-env-overrides\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.332903 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-script-lib\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.335667 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e963298-5c99-4db8-bdba-88187d4b0018-ovn-node-metrics-cert\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.347300 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp28q\" (UniqueName: \"kubernetes.io/projected/0e963298-5c99-4db8-bdba-88187d4b0018-kube-api-access-tp28q\") pod \"ovnkube-node-c6r5d\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.380300 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.380338 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.380350 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.380365 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.380374 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:58Z","lastTransitionTime":"2026-02-04T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.509572 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:13:40.836383272 +0000 UTC
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.538091 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.586886 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.586921 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.586931 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.586946 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.586956 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:58Z","lastTransitionTime":"2026-02-04T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.690588 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.690627 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.690639 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.690657 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.690671 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:58Z","lastTransitionTime":"2026-02-04T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.707525 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.707584 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.707600 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"11c570621833a66893a4cc74e7a1d4d9ce24eabc4823107fdc48ee4257a333d5"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.709631 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.709674 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"1ce83136c79a115e349d962f70ab2bb323d5948a29c94ddcd9ec8910ba9aa545"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.711774 4728 generic.go:334] "Generic (PLEG): container finished" podID="b010d460-72d6-4943-9230-8750e91ef21c" containerID="52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb" exitCode=0
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.711825 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" event={"ID":"b010d460-72d6-4943-9230-8750e91ef21c","Type":"ContainerDied","Data":"52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.711885 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" event={"ID":"b010d460-72d6-4943-9230-8750e91ef21c","Type":"ContainerStarted","Data":"f0b354b19b6244c18825f5751870ea43971c5390dbaf7fb40794a5ac755ed31d"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.714467 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dc6rd" event={"ID":"3dbc56be-abfc-4180-870e-f4c19bd09f4b","Type":"ContainerStarted","Data":"cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.714516 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dc6rd" event={"ID":"3dbc56be-abfc-4180-870e-f4c19bd09f4b","Type":"ContainerStarted","Data":"6825783d139b820f46f485909b75f79a229c636760155a366a42c734f560d91e"}
Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.716410 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tlf2v" event={"ID":"83fdeccf-dd9f-4c93-bece-3382f3f4898f","Type":"ContainerStarted","Data":"d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62"}
Feb 04 11:27:58 crc
kubenswrapper[4728]: I0204 11:27:58.716449 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tlf2v" event={"ID":"83fdeccf-dd9f-4c93-bece-3382f3f4898f","Type":"ContainerStarted","Data":"11991598b82794854c8e581f7b4e7f4de3ca361eb12fa4060df36ea1afb29d42"} Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.723259 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is 
after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.739505 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.756465 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.770865 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.784058 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.793551 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.793598 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.793611 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.793627 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.793637 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:58Z","lastTransitionTime":"2026-02-04T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.800485 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.814305 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.828484 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.847349 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.863549 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.878940 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.894665 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.896398 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.896451 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.896480 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.896503 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.896521 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:58Z","lastTransitionTime":"2026-02-04T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.920116 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.934993 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.946496 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.964134 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.975633 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.986286 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.997582 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e
2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:58Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.999368 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.999419 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.999436 4728 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.999455 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:58 crc kubenswrapper[4728]: I0204 11:27:58.999468 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:58Z","lastTransitionTime":"2026-02-04T11:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.009642 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.024049 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.039369 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.055201 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.069387 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.084567 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.102061 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.102134 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.102147 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.102165 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.102178 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:59Z","lastTransitionTime":"2026-02-04T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.109336 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.204402 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.204429 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.204437 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.204450 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.204459 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:59Z","lastTransitionTime":"2026-02-04T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.241360 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.241469 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.241505 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.241524 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.241630 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.241636 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.241662 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.241725 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:07.241699904 +0000 UTC m=+36.384404309 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.241647 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.241779 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:07.241741915 +0000 UTC m=+36.384446310 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.241804 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:28:07.241791916 +0000 UTC m=+36.384496311 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.241831 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.241904 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:07.241881868 +0000 UTC m=+36.384586403 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.307314 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.307356 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.307368 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.307391 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.307406 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:59Z","lastTransitionTime":"2026-02-04T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.342926 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.343113 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.343141 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.343153 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.343211 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:07.343192235 +0000 UTC m=+36.485896690 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.409953 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.409996 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.410008 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.410026 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.410042 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:59Z","lastTransitionTime":"2026-02-04T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.510649 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:49:27.116318956 +0000 UTC Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.512211 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.512258 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.512270 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.512289 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.512303 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:59Z","lastTransitionTime":"2026-02-04T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.553283 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.553332 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.553347 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.553472 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.553617 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:27:59 crc kubenswrapper[4728]: E0204 11:27:59.553776 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.614024 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.614382 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.614395 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.614410 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.614421 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:59Z","lastTransitionTime":"2026-02-04T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.717227 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.717259 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.717270 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.717284 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.717295 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:59Z","lastTransitionTime":"2026-02-04T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.721060 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f" exitCode=0 Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.721124 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.723100 4728 generic.go:334] "Generic (PLEG): container finished" podID="b010d460-72d6-4943-9230-8750e91ef21c" containerID="d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624" exitCode=0 Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.723174 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" event={"ID":"b010d460-72d6-4943-9230-8750e91ef21c","Type":"ContainerDied","Data":"d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.735428 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.748050 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.769276 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z 
is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.781162 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.806513 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.818330 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.819858 4728 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.819898 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.819907 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.819921 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.819931 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:59Z","lastTransitionTime":"2026-02-04T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.831634 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.843215 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.855891 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.868390 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.884122 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc 
kubenswrapper[4728]: I0204 11:27:59.897502 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.912296 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.922372 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.922424 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.922437 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.922455 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.922466 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:27:59Z","lastTransitionTime":"2026-02-04T11:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.926347 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.939641 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.953107 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.967843 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:27:59 crc kubenswrapper[4728]: I0204 11:27:59.994307 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:27:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.019619 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.024165 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.024195 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.024203 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.024216 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.024224 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:00Z","lastTransitionTime":"2026-02-04T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.040200 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.063487 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z 
is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.075025 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.088339 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.100448 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.112255 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.123307 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.127348 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.127377 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.127389 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.127405 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.127415 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:00Z","lastTransitionTime":"2026-02-04T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.229245 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.229270 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.229280 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.229292 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.229304 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:00Z","lastTransitionTime":"2026-02-04T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.331908 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.332372 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.332392 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.332418 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.332431 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:00Z","lastTransitionTime":"2026-02-04T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.358010 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hxdks"] Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.358395 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-hxdks" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.360548 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.360568 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.360732 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.361160 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.374518 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.387368 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.411100 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.425778 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.434622 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.434655 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.434664 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.434678 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.434687 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:00Z","lastTransitionTime":"2026-02-04T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.440818 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.452563 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0-host\") pod \"node-ca-hxdks\" (UID: \"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\") " pod="openshift-image-registry/node-ca-hxdks" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.452599 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl8sk\" (UniqueName: \"kubernetes.io/projected/1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0-kube-api-access-dl8sk\") pod \"node-ca-hxdks\" (UID: \"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\") " pod="openshift-image-registry/node-ca-hxdks" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.452617 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0-serviceca\") pod \"node-ca-hxdks\" (UID: \"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\") " pod="openshift-image-registry/node-ca-hxdks" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.457932 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.472062 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.487008 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.498850 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.511857 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:45:39.273772902 +0000 UTC Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.512356 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.525922 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.537090 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.537135 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.537148 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.537166 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.537180 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:00Z","lastTransitionTime":"2026-02-04T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.542110 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.553356 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0-host\") pod \"node-ca-hxdks\" (UID: \"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\") " pod="openshift-image-registry/node-ca-hxdks" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.553410 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl8sk\" (UniqueName: \"kubernetes.io/projected/1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0-kube-api-access-dl8sk\") pod \"node-ca-hxdks\" (UID: \"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\") " pod="openshift-image-registry/node-ca-hxdks" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.553439 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0-serviceca\") pod \"node-ca-hxdks\" (UID: \"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\") " pod="openshift-image-registry/node-ca-hxdks" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.553485 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0-host\") pod \"node-ca-hxdks\" (UID: \"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\") " pod="openshift-image-registry/node-ca-hxdks" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.554515 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0-serviceca\") pod \"node-ca-hxdks\" (UID: \"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\") " pod="openshift-image-registry/node-ca-hxdks" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.557847 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.573776 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.577480 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl8sk\" (UniqueName: \"kubernetes.io/projected/1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0-kube-api-access-dl8sk\") pod \"node-ca-hxdks\" (UID: \"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\") " pod="openshift-image-registry/node-ca-hxdks" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.640537 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.640607 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.640632 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.640664 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.640689 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:00Z","lastTransitionTime":"2026-02-04T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.670134 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hxdks" Feb 04 11:28:00 crc kubenswrapper[4728]: W0204 11:28:00.684924 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a1278ce_9dcb_4501_bb81_c0fa0f4fbbc0.slice/crio-9800ae9bf37cc924848d70f36012679ab8ff66b29bd5066c9505681105414673 WatchSource:0}: Error finding container 9800ae9bf37cc924848d70f36012679ab8ff66b29bd5066c9505681105414673: Status 404 returned error can't find the container with id 9800ae9bf37cc924848d70f36012679ab8ff66b29bd5066c9505681105414673 Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.732890 4728 generic.go:334] "Generic (PLEG): container finished" podID="b010d460-72d6-4943-9230-8750e91ef21c" containerID="a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f" exitCode=0 Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.732996 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" event={"ID":"b010d460-72d6-4943-9230-8750e91ef21c","Type":"ContainerDied","Data":"a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.734804 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hxdks" event={"ID":"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0","Type":"ContainerStarted","Data":"9800ae9bf37cc924848d70f36012679ab8ff66b29bd5066c9505681105414673"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.742363 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.742304 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.742413 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.742426 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.742442 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.742440 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.742452 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:00Z","lastTransitionTime":"2026-02-04T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.742469 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.742548 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.742581 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.742607 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.751781 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.769977 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.782911 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.794854 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.807086 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.831019 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z 
is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.844281 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.844326 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.844339 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.844358 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.844371 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:00Z","lastTransitionTime":"2026-02-04T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.846993 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.860441 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.876064 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.887835 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.903385 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.916058 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.927109 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.946348 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.946395 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.946405 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.946421 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.946432 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:00Z","lastTransitionTime":"2026-02-04T11:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:00 crc kubenswrapper[4728]: I0204 11:28:00.958334 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:00Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.049145 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.049183 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.049195 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.049211 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.049223 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:01Z","lastTransitionTime":"2026-02-04T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.152260 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.152302 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.152313 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.152329 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.152342 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:01Z","lastTransitionTime":"2026-02-04T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.255686 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.255732 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.255742 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.255772 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.255781 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:01Z","lastTransitionTime":"2026-02-04T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.331016 4728 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 04 11:28:01 crc kubenswrapper[4728]: W0204 11:28:01.332694 4728 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Feb 04 11:28:01 crc kubenswrapper[4728]: W0204 11:28:01.332726 4728 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Feb 04 11:28:01 crc kubenswrapper[4728]: W0204 11:28:01.333140 4728 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 04 11:28:01 crc kubenswrapper[4728]: W0204 11:28:01.333150 4728 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.359045 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.359081 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.359089 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.359102 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.359111 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:01Z","lastTransitionTime":"2026-02-04T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.462048 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.462082 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.462094 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.462111 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.462127 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:01Z","lastTransitionTime":"2026-02-04T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.512610 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:59:08.532764588 +0000 UTC Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.553727 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.553841 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.553777 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:01 crc kubenswrapper[4728]: E0204 11:28:01.553989 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:01 crc kubenswrapper[4728]: E0204 11:28:01.554123 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:01 crc kubenswrapper[4728]: E0204 11:28:01.554262 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.567988 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.568393 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.568557 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.568700 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.568856 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:01Z","lastTransitionTime":"2026-02-04T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.571254 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347202
43b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.587523 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.604664 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.617918 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.639180 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.652873 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.665990 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.670972 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.671018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.671035 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.671056 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.671071 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:01Z","lastTransitionTime":"2026-02-04T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.688829 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf4
26c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.702693 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.713569 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.729844 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36
dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.740739 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.748391 4728 generic.go:334] "Generic (PLEG): container finished" podID="b010d460-72d6-4943-9230-8750e91ef21c" containerID="8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f" exitCode=0 Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.748455 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" event={"ID":"b010d460-72d6-4943-9230-8750e91ef21c","Type":"ContainerDied","Data":"8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.751830 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hxdks" event={"ID":"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0","Type":"ContainerStarted","Data":"e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.756626 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.768141 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.773059 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.773092 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.773100 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.773113 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.773121 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:01Z","lastTransitionTime":"2026-02-04T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.780710 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.791658 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.801224 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.809241 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.821410 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.833661 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.844847 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.856736 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.875601 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.875657 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.875961 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.875979 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.875993 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:01Z","lastTransitionTime":"2026-02-04T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.878422 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.923072 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z 
is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.960162 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.977686 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.977961 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.977974 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.977988 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.977998 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:01Z","lastTransitionTime":"2026-02-04T11:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:01 crc kubenswrapper[4728]: I0204 11:28:01.999046 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.038686 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.076932 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.080220 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.080246 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.080255 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.080267 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.080275 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:02Z","lastTransitionTime":"2026-02-04T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.182574 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.182605 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.182613 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.182626 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.182635 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:02Z","lastTransitionTime":"2026-02-04T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.285193 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.285229 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.285240 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.285255 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.285267 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:02Z","lastTransitionTime":"2026-02-04T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.387789 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.387831 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.387840 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.387853 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.387863 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:02Z","lastTransitionTime":"2026-02-04T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.455837 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.489860 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.489904 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.489915 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.489932 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.489943 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:02Z","lastTransitionTime":"2026-02-04T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.502009 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.513829 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:29:31.522745598 +0000 UTC Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.580160 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.591993 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.592043 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.592056 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.592075 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.592089 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:02Z","lastTransitionTime":"2026-02-04T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.694095 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.694133 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.694146 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.694161 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.694173 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:02Z","lastTransitionTime":"2026-02-04T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.757645 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"} Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.761258 4728 generic.go:334] "Generic (PLEG): container finished" podID="b010d460-72d6-4943-9230-8750e91ef21c" containerID="0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230" exitCode=0 Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.761358 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" event={"ID":"b010d460-72d6-4943-9230-8750e91ef21c","Type":"ContainerDied","Data":"0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230"} Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.773882 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.783594 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.795191 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.796206 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.796245 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.796252 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.796267 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.796276 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:02Z","lastTransitionTime":"2026-02-04T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.806912 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.817743 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.828378 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.841487 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.853351 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.864032 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.875661 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.889651 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.898098 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:02 
crc kubenswrapper[4728]: I0204 11:28:02.898147 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.898158 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.898175 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.898186 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:02Z","lastTransitionTime":"2026-02-04T11:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.898999 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.902509 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 
11:28:02.915575 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:02 crc kubenswrapper[4728]: I0204 11:28:02.935121 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:02Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.000182 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.000215 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.000224 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.000238 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.000248 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:03Z","lastTransitionTime":"2026-02-04T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.102699 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.103023 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.103045 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.103059 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.103072 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:03Z","lastTransitionTime":"2026-02-04T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.204959 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.205004 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.205018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.205037 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.205052 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:03Z","lastTransitionTime":"2026-02-04T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.309253 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.309300 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.309317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.309334 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.309345 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:03Z","lastTransitionTime":"2026-02-04T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.411659 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.411697 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.411708 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.411723 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.411735 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:03Z","lastTransitionTime":"2026-02-04T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.513952 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:17:09.250348501 +0000 UTC Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.514305 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.514340 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.514351 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.514369 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.514381 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:03Z","lastTransitionTime":"2026-02-04T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.553689 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.553711 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.553780 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:03 crc kubenswrapper[4728]: E0204 11:28:03.553871 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:03 crc kubenswrapper[4728]: E0204 11:28:03.553988 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:03 crc kubenswrapper[4728]: E0204 11:28:03.554079 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.617280 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.617312 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.617321 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.617336 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.617347 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:03Z","lastTransitionTime":"2026-02-04T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.719174 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.719206 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.719216 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.719229 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.719239 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:03Z","lastTransitionTime":"2026-02-04T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.767632 4728 generic.go:334] "Generic (PLEG): container finished" podID="b010d460-72d6-4943-9230-8750e91ef21c" containerID="8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6" exitCode=0 Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.767684 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" event={"ID":"b010d460-72d6-4943-9230-8750e91ef21c","Type":"ContainerDied","Data":"8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6"} Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.780237 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.793691 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.814079 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.822047 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.822102 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.822120 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.822144 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.822160 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:03Z","lastTransitionTime":"2026-02-04T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.830370 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.841352 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.855240 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.867171 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.877895 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.887960 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.901811 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\
"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.913545 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.924148 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.924178 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.924190 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.924205 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.924232 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:03Z","lastTransitionTime":"2026-02-04T11:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.924716 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.937636 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:03 crc kubenswrapper[4728]: I0204 11:28:03.950514 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9
efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:03Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.026427 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.026459 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.026467 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.026480 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.026489 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:04Z","lastTransitionTime":"2026-02-04T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.128553 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.128587 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.128602 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.128616 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.128626 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:04Z","lastTransitionTime":"2026-02-04T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.230977 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.231013 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.231024 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.231040 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.231053 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:04Z","lastTransitionTime":"2026-02-04T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.333915 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.333955 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.333964 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.333980 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.333991 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:04Z","lastTransitionTime":"2026-02-04T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.436697 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.436788 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.436813 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.436840 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.436861 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:04Z","lastTransitionTime":"2026-02-04T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.514384 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:46:42.831756067 +0000 UTC Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.539560 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.539607 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.539618 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.539634 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.539652 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:04Z","lastTransitionTime":"2026-02-04T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.645619 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.646010 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.646026 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.646040 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.646050 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:04Z","lastTransitionTime":"2026-02-04T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.748165 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.748200 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.748208 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.748222 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.748232 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:04Z","lastTransitionTime":"2026-02-04T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.775816 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"fb31efe41c0900c9d19687f8c7873cad0bd54f5a168931caef8a8c420509a2b4"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.776137 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.780435 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" event={"ID":"b010d460-72d6-4943-9230-8750e91ef21c","Type":"ContainerStarted","Data":"d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.795911 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.804438 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.807833 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.821491 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.831212 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.847721 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.850457 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.850502 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:04 crc 
kubenswrapper[4728]: I0204 11:28:04.850514 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.850531 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.850546 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:04Z","lastTransitionTime":"2026-02-04T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.866832 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.879143 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.890823 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.903328 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.921964 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb31efe41c0900c9d19687f8c7873cad0bd54f5a
168931caef8a8c420509a2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.934615 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.945190 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.952422 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.952458 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.952467 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.952479 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.952487 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:04Z","lastTransitionTime":"2026-02-04T11:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.956936 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.965881 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.977358 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:04 crc kubenswrapper[4728]: I0204 11:28:04.986737 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.000411 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:04Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.012044 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.026631 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.039939 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.053368 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.054584 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.054620 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.054629 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.054641 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.054650 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:05Z","lastTransitionTime":"2026-02-04T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.070768 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb31efe41c0900c9d19687f8c7873cad0bd54f5a168931caef8a8c420509a2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.081573 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.090258 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.102667 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.113444 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.122549 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.130626 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.157189 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.157227 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.157235 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.157251 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.157260 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:05Z","lastTransitionTime":"2026-02-04T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.259611 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.259658 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.259671 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.259689 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.259703 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:05Z","lastTransitionTime":"2026-02-04T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.361575 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.361609 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.361622 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.361638 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.361650 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:05Z","lastTransitionTime":"2026-02-04T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.465551 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.465655 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.465669 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.465690 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.465743 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:05Z","lastTransitionTime":"2026-02-04T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.514889 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:10:40.895308533 +0000 UTC
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.553343 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.553350 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:05 crc kubenswrapper[4728]: E0204 11:28:05.553532 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.553482 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:05 crc kubenswrapper[4728]: E0204 11:28:05.553730 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:28:05 crc kubenswrapper[4728]: E0204 11:28:05.553879 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.568979 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.569024 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.569035 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.569049 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.569065 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:05Z","lastTransitionTime":"2026-02-04T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.671437 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.671485 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.671498 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.671515 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.671529 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:05Z","lastTransitionTime":"2026-02-04T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.773970 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.774021 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.774036 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.774055 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.774068 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:05Z","lastTransitionTime":"2026-02-04T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.783708 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.784281 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.837358 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.853457 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.869297 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.876522 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.876555 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.876563 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.876575 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.876583 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:05Z","lastTransitionTime":"2026-02-04T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.890722 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb31efe41c0900c9d19687f8c7873cad0bd54f5a168931caef8a8c420509a2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.903951 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.915473 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.928489 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.939690 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.949849 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.958118 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.968222 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\
"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.978421 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.978853 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.978887 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.978895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.978909 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.978918 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:05Z","lastTransitionTime":"2026-02-04T11:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.988297 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:05 crc kubenswrapper[4728]: I0204 11:28:05.999315 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:05Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.010450 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.081456 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.081509 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.081533 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.081555 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.081572 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:06Z","lastTransitionTime":"2026-02-04T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.183656 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.183707 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.183720 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.183744 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.183788 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:06Z","lastTransitionTime":"2026-02-04T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.286611 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.286651 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.286664 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.286679 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.286691 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:06Z","lastTransitionTime":"2026-02-04T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.388689 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.388742 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.388791 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.388811 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.388826 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:06Z","lastTransitionTime":"2026-02-04T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.491261 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.491317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.491333 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.491375 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.491393 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:06Z","lastTransitionTime":"2026-02-04T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.515799 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:56:14.326119577 +0000 UTC
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.553567 4728 scope.go:117] "RemoveContainer" containerID="670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.593740 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.593876 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.593898 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.593919 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.593934 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:06Z","lastTransitionTime":"2026-02-04T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.696463 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.696495 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.696504 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.696516 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.696525 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:06Z","lastTransitionTime":"2026-02-04T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.787609 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.788863 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822"}
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.788901 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.798488 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.798527 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.798540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.798556 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.798569 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:06Z","lastTransitionTime":"2026-02-04T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.807796 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.817438 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.831392 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.844068 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.854416 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.863631 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.875636 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.900295 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.900328 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.900340 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.900354 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.900365 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:06Z","lastTransitionTime":"2026-02-04T11:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.916913 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.930642 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.942524 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.955698 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.969291 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.980393 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:06 crc kubenswrapper[4728]: I0204 11:28:06.997251 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb31efe41c0900c9d19687f8c7873cad0bd54f5a
168931caef8a8c420509a2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:06Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.002720 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.002743 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.002764 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.002777 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.002785 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.107427 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.107454 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.107462 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.107474 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.107483 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.209939 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.210328 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.210343 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.210361 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.210376 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.262717 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.312360 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.312386 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.312394 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.312406 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.312414 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.320273 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.320342 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.320376 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.320396 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.320413 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:28:23.320393693 +0000 UTC m=+52.463098078 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.320544 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.320545 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.320583 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:23.320573517 +0000 UTC m=+52.463277902 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.320633 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:23.320608417 +0000 UTC m=+52.463312862 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.320823 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.320837 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.320848 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.320895 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-04 11:28:23.320883833 +0000 UTC m=+52.463588288 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.414248 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.414284 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.414295 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.414329 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.414341 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.421626 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.421791 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.421813 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.421825 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.421870 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:23.421856483 +0000 UTC m=+52.564560868 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.515925 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:09:59.370784424 +0000 UTC Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.516459 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.516491 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.516501 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.516514 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.516524 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.553517 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.553580 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.553630 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.553713 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.553585 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.553826 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.618184 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.618217 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.618227 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.618241 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.618254 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.631620 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.631660 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.631671 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.631688 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.631698 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.643108 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 
2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.649451 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.649493 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.649518 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.649547 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.649558 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.662307 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 
2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.665350 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.665388 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.665398 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.665413 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.665425 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.675604 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 
2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.678661 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.678690 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.678701 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.678720 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.678729 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.688660 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 
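[Editor's note] Every patch attempt above fails inside Go's TLS handshake: the serving certificate of the network-node-identity webhook on 127.0.0.1:9743 has NotAfter 2025-08-24T17:21:41Z, while the node clock reads 2026-02-04T11:28:07Z, so crypto/x509 rejects the chain with "certificate has expired or is not yet valid". A minimal diagnostic sketch in Go follows (not part of the log; the endpoint is taken from the messages above, everything else is illustrative). It fetches the peer certificate despite the failed verification and reproduces the validity comparison:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the log. InsecureSkipVerify lets
	// us read the certificate even though normal verification fails; this
	// is for diagnosis only, never for real traffic.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
		cert.Subject,
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339))

	// The same window check crypto/x509 performs during verification;
	// with the dates in the log it reports an expired certificate.
	if now := time.Now(); now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Retrying cannot clear this condition; only re-issuing the webhook's serving certificate (or correcting a skewed node clock) will let the status patches go through.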
2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.692454 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.692492 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.692501 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.692515 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.692524 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.703710 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 
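[Editor's note] The retries are bounded: kubelet attempts the node-status patch a fixed number of times per sync (the upstream constant nodeStatusUpdateRetry, assumed here to be 5 as in recent releases) and then emits the terminal "Unable to update node status ... update node status exceeds retry count" entry seen next. Because the admission webhook deterministically rejects every attempt with the same expired-certificate error, exhausting the retry budget is inevitable. An illustrative Go sketch of that control flow (an editorial paraphrase, not kubelet source; names mirror the log):

package main

import (
	"errors"
	"fmt"
)

// Assumed to mirror the upstream kubelet constant; the exact value is an
// editorial assumption, not taken from this log.
const nodeStatusUpdateRetry = 5

// patchNodeStatus stands in for the PATCH nodes/crc/status call; in the log
// every call is rejected by the node.network-node-identity.openshift.io
// admission webhook before the patch can be persisted.
func patchNodeStatus() error {
	return errors.New("tls: failed to verify certificate: x509: certificate has expired or is not yet valid")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		err := patchNodeStatus()
		if err == nil {
			return nil
		}
		// Corresponds to the repeated E-level entries above.
		fmt.Println("Error updating node status, will retry err=" + err.Error())
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		// Corresponds to the terminal entry in the next log line.
		fmt.Println("Unable to update node status err=" + err.Error())
	}
}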
2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: E0204 11:28:07.703876 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.720635 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.720670 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.720680 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.720709 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.720726 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.742939 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.792638 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/0.log" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.795648 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="fb31efe41c0900c9d19687f8c7873cad0bd54f5a168931caef8a8c420509a2b4" exitCode=1 Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.795721 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"fb31efe41c0900c9d19687f8c7873cad0bd54f5a168931caef8a8c420509a2b4"} Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.796293 4728 scope.go:117] "RemoveContainer" containerID="fb31efe41c0900c9d19687f8c7873cad0bd54f5a168931caef8a8c420509a2b4" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.810405 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.823505 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.823544 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.823555 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.823570 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.823582 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.826962 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.840291 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.854866 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.870155 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.886788 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.899808 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.920309 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb31efe41c0900c9d19687f8c7873cad0bd54f5a
168931caef8a8c420509a2b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb31efe41c0900c9d19687f8c7873cad0bd54f5a168931caef8a8c420509a2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"1.EgressIP event handler 8 for removal\\\\nI0204 11:28:07.295264 6058 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:07.295276 6058 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:07.295291 6058 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:07.295299 6058 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:07.295312 6058 factory.go:656] Stopping watch factory\\\\nI0204 11:28:07.295327 6058 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0204 11:28:07.295326 6058 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:07.295335 6058 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 11:28:07.295346 6058 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:07.295589 6058 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0204 11:28:07.295672 6058 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0204 11:28:07.295714 6058 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:07.295163 6058 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0204 11:28:07.295809 6058 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0204 11:28:07.295774 6058 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 11:28:07.295922 6058 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.925810 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.925841 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.925850 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.925863 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.925901 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:07Z","lastTransitionTime":"2026-02-04T11:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.931891 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.943974 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.959513 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.972652 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.983826 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:07 crc kubenswrapper[4728]: I0204 11:28:07.994225 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:07Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.027920 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.027961 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.027971 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.027986 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.027995 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:08Z","lastTransitionTime":"2026-02-04T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.130022 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.130044 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.130052 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.130063 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.130071 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:08Z","lastTransitionTime":"2026-02-04T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.232279 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.232317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.232340 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.232357 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.232370 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:08Z","lastTransitionTime":"2026-02-04T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.337848 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.337891 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.337901 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.337917 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.337926 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:08Z","lastTransitionTime":"2026-02-04T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.440389 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.440433 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.440443 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.440455 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.440463 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:08Z","lastTransitionTime":"2026-02-04T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.516344 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:59:33.386567971 +0000 UTC
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.543043 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.543082 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.543095 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.543113 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.543127 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:08Z","lastTransitionTime":"2026-02-04T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.645119 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.645151 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.645160 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.645172 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.645181 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:08Z","lastTransitionTime":"2026-02-04T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.747342 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.747608 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.747697 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.747800 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.747876 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:08Z","lastTransitionTime":"2026-02-04T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.801994 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/1.log"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.802554 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/0.log"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.805085 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c" exitCode=1
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.805123 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c"}
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.805162 4728 scope.go:117] "RemoveContainer" containerID="fb31efe41c0900c9d19687f8c7873cad0bd54f5a168931caef8a8c420509a2b4"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.806256 4728 scope.go:117] "RemoveContainer" containerID="ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c"
Feb 04 11:28:08 crc kubenswrapper[4728]: E0204 11:28:08.806482 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.819856 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.831482 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.844277 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw"]
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.844777 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.846874 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.847829 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.849704 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.849741 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.849768 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.849792 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.849802 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:08Z","lastTransitionTime":"2026-02-04T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.851694 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb31efe41c0900c9d19687f8c7873cad0bd54f5a168931caef8a8c420509a2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"1.EgressIP event handler 8 for removal\\\\nI0204 11:28:07.295264 6058 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:07.295276 6058 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:07.295291 6058 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:07.295299 6058 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:07.295312 6058 factory.go:656] Stopping watch factory\\\\nI0204 11:28:07.295327 6058 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0204 11:28:07.295326 6058 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:07.295335 6058 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 11:28:07.295346 6058 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:07.295589 6058 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0204 11:28:07.295672 6058 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0204 11:28:07.295714 6058 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:07.295163 6058 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0204 11:28:07.295809 6058 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0204 11:28:07.295774 6058 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 11:28:07.295922 6058 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547382 6203 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547644 6203 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 11:28:08.548073 6203 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.548403 6203 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:08.548427 6203 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:08.548452 6203 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0204 11:28:08.548458 6203 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0204 11:28:08.548505 6203 factory.go:656] Stopping watch factory\\\\nI0204 11:28:08.548517 6203 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:08.548548 6203 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:08.548559 6203 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:08.548565 6203 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.870041 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.879306 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.890351 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.901124 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.912620 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.928066 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.951460 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.951492 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.951501 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.951515 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.951527 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:08Z","lastTransitionTime":"2026-02-04T11:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.959387 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:08 crc kubenswrapper[4728]: I0204 11:28:08.983563 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.000694 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:08Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.014558 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.030362 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.037375 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a52364b-5c09-4b77-95c3-7d9a7488afea-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.037438 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a52364b-5c09-4b77-95c3-7d9a7488afea-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.037490 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j2sr\" (UniqueName: \"kubernetes.io/projected/0a52364b-5c09-4b77-95c3-7d9a7488afea-kube-api-access-4j2sr\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.037526 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a52364b-5c09-4b77-95c3-7d9a7488afea-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.044606 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.054153 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.054188 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.054203 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.054220 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.054231 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:09Z","lastTransitionTime":"2026-02-04T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.056371 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.067080 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.081705 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.093869 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.105116 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.115392 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.130898 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b
007021cc01644669d7e0ed3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb31efe41c0900c9d19687f8c7873cad0bd54f5a168931caef8a8c420509a2b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:07Z\\\",\\\"message\\\":\\\"1.EgressIP event handler 8 for removal\\\\nI0204 11:28:07.295264 6058 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:07.295276 6058 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:07.295291 6058 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:07.295299 6058 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:07.295312 6058 factory.go:656] Stopping watch factory\\\\nI0204 11:28:07.295327 6058 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0204 11:28:07.295326 6058 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:07.295335 6058 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 11:28:07.295346 6058 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:07.295589 6058 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0204 11:28:07.295672 6058 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0204 11:28:07.295714 6058 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:07.295163 6058 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0204 11:28:07.295809 6058 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0204 11:28:07.295774 6058 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0204 11:28:07.295922 6058 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547382 6203 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547644 6203 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 11:28:08.548073 6203 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.548403 6203 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:08.548427 6203 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:08.548452 6203 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0204 11:28:08.548458 6203 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0204 11:28:08.548505 6203 factory.go:656] Stopping watch factory\\\\nI0204 11:28:08.548517 6203 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:08.548548 6203 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0204 11:28:08.548559 6203 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:08.548565 6203 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\
\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.138133 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a52364b-5c09-4b77-95c3-7d9a7488afea-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.138163 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j2sr\" (UniqueName: \"kubernetes.io/projected/0a52364b-5c09-4b77-95c3-7d9a7488afea-kube-api-access-4j2sr\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.138183 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a52364b-5c09-4b77-95c3-7d9a7488afea-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.138230 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a52364b-5c09-4b77-95c3-7d9a7488afea-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.138736 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/0a52364b-5c09-4b77-95c3-7d9a7488afea-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.138898 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a52364b-5c09-4b77-95c3-7d9a7488afea-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.142448 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.143367 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a52364b-5c09-4b77-95c3-7d9a7488afea-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.152895 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 
11:28:09.153717 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j2sr\" (UniqueName: \"kubernetes.io/projected/0a52364b-5c09-4b77-95c3-7d9a7488afea-kube-api-access-4j2sr\") pod \"ovnkube-control-plane-749d76644c-rm2jw\" (UID: \"0a52364b-5c09-4b77-95c3-7d9a7488afea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.158923 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.159269 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.159494 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.159603 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.159698 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.159845 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:09Z","lastTransitionTime":"2026-02-04T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:09 crc kubenswrapper[4728]: W0204 11:28:09.173618 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a52364b_5c09_4b77_95c3_7d9a7488afea.slice/crio-6e781b814e8c49fd4868accea2ea32e73dfdfb04b8dff04e3ab1b455062400e7 WatchSource:0}: Error finding container 6e781b814e8c49fd4868accea2ea32e73dfdfb04b8dff04e3ab1b455062400e7: Status 404 returned error can't find the container with id 6e781b814e8c49fd4868accea2ea32e73dfdfb04b8dff04e3ab1b455062400e7
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.173794 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.184886 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.202294 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.222327 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.238489 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.262150 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:09 crc 
kubenswrapper[4728]: I0204 11:28:09.262386 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.262489 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.262569 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.262635 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:09Z","lastTransitionTime":"2026-02-04T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.365106 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.365137 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.365145 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.365158 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.365168 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:09Z","lastTransitionTime":"2026-02-04T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.468082 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.468124 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.468133 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.468146 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.468156 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:09Z","lastTransitionTime":"2026-02-04T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.517441 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:48:08.54681386 +0000 UTC
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.553265 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.553316 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:09 crc kubenswrapper[4728]: E0204 11:28:09.553393 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.553488 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:28:09 crc kubenswrapper[4728]: E0204 11:28:09.553483 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:28:09 crc kubenswrapper[4728]: E0204 11:28:09.553542 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.570139 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.570185 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.570197 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.570212 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.570224 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:09Z","lastTransitionTime":"2026-02-04T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.672770 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.672807 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.672817 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.672833 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.672842 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:09Z","lastTransitionTime":"2026-02-04T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.775258 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.775284 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.775293 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.775304 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.775313 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:09Z","lastTransitionTime":"2026-02-04T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.810068 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/1.log"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.814416 4728 scope.go:117] "RemoveContainer" containerID="ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c"
Feb 04 11:28:09 crc kubenswrapper[4728]: E0204 11:28:09.814617 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.815062 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" event={"ID":"0a52364b-5c09-4b77-95c3-7d9a7488afea","Type":"ContainerStarted","Data":"dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff"}
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.815104 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" event={"ID":"0a52364b-5c09-4b77-95c3-7d9a7488afea","Type":"ContainerStarted","Data":"6e781b814e8c49fd4868accea2ea32e73dfdfb04b8dff04e3ab1b455062400e7"}
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.828709 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.841127 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.854141 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.865485 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.876917 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.879229 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.879267 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.879280 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.879300 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.879314 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:09Z","lastTransitionTime":"2026-02-04T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.891047 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.908451 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.923351 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.937615 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.952473 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.969074 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.981393 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.981441 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.981456 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.981477 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.981491 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:09Z","lastTransitionTime":"2026-02-04T11:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:09 crc kubenswrapper[4728]: I0204 11:28:09.983944 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.000406 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.018016 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.036808 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b
007021cc01644669d7e0ed3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547382 6203 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547644 6203 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 11:28:08.548073 6203 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.548403 6203 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:08.548427 6203 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:08.548452 6203 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0204 11:28:08.548458 6203 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0204 11:28:08.548505 6203 factory.go:656] Stopping watch factory\\\\nI0204 11:28:08.548517 6203 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:08.548548 6203 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:08.548559 6203 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:08.548565 6203 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.083269 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.083313 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.083326 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.083341 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.083352 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:10Z","lastTransitionTime":"2026-02-04T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.186525 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.186564 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.186579 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.186604 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.186617 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:10Z","lastTransitionTime":"2026-02-04T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.289505 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.289540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.289550 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.289564 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.289575 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:10Z","lastTransitionTime":"2026-02-04T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.391837 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.391879 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.391891 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.391909 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.391920 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:10Z","lastTransitionTime":"2026-02-04T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.493919 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.493971 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.493986 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.494005 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.494019 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:10Z","lastTransitionTime":"2026-02-04T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.518250 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:13:21.382147898 +0000 UTC
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.595939 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.596245 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.596317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.596386 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.596442 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:10Z","lastTransitionTime":"2026-02-04T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
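[annotation, not part of the captured log] The five-record block above repeats roughly every 100 ms because kubelet re-runs its node status sync on every pass, re-recording the Node* events and re-asserting the NotReady condition: the container runtime keeps reporting NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. In this capture the network provider is OVN-Kubernetes, and the ovnkube-controller container that would write that configuration is itself in CrashLoopBackOff ("back-off 10s restarting failed container=ovnkube-controller" earlier in the segment), so the loop cannot converge. A minimal sketch of the directory probe behind the message, assuming only the path named in the log (the real check lives inside kubelet's network-plugin code, not in this standalone form):

    // cniprobe.go — illustrative sketch, not kubelet code.
    // Looks for a CNI network config in the directory that the
    // "NetworkPluginNotReady" message above complains about.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            // CNI config loaders typically accept .conf, .conflist and .json
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file; node stays NotReady until the provider writes one")
        }
    }

Until such a file exists, the Ready condition stays False with reason KubeletNotReady, which is exactly the condition={...} payload repeated throughout this segment.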
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.699523 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.699569 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.699581 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.699596 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.699606 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:10Z","lastTransitionTime":"2026-02-04T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.802566 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.802647 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.802679 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.802708 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.802778 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:10Z","lastTransitionTime":"2026-02-04T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.819833 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" event={"ID":"0a52364b-5c09-4b77-95c3-7d9a7488afea","Type":"ContainerStarted","Data":"416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7"} Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.831957 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.844527 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.856413 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.868632 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.878709 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.891923 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.904908 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.904952 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.904964 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.904977 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.904986 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:10Z","lastTransitionTime":"2026-02-04T11:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.905274 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.918516 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.934477 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.950393 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.963176 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.975088 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:10 crc kubenswrapper[4728]: I0204 11:28:10.987770 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:10Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.002002 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.007378 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.007431 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.007448 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.007470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.007485 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:11Z","lastTransitionTime":"2026-02-04T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.028458 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547382 6203 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547644 6203 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 11:28:08.548073 6203 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.548403 6203 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:08.548427 6203 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:08.548452 6203 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0204 11:28:08.548458 6203 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0204 11:28:08.548505 6203 factory.go:656] Stopping watch factory\\\\nI0204 11:28:08.548517 6203 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:08.548548 6203 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:08.548559 6203 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:08.548565 6203 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.109686 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.109736 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.109772 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.109796 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.109811 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:11Z","lastTransitionTime":"2026-02-04T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.212530 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.212593 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.212603 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.212622 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.212633 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:11Z","lastTransitionTime":"2026-02-04T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.314882 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.314946 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.314962 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.314985 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.314999 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:11Z","lastTransitionTime":"2026-02-04T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.417803 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.417877 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.417893 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.417910 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.418216 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:11Z","lastTransitionTime":"2026-02-04T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.450137 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-q6m9t"] Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.450849 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:11 crc kubenswrapper[4728]: E0204 11:28:11.450943 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.460763 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2pqp\" (UniqueName: \"kubernetes.io/projected/8fd2519d-be03-457c-b9d6-70862115f6a9-kube-api-access-t2pqp\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.460848 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.464274 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.482823 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.495775 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.508012 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.518122 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.518728 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:06:22.003646234 +0000 UTC Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.519806 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.519836 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.519846 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.519859 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.519867 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:11Z","lastTransitionTime":"2026-02-04T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.531092 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath
\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.542607 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.551932 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.553019 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:11 crc kubenswrapper[4728]: E0204 11:28:11.553112 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.553138 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:11 crc kubenswrapper[4728]: E0204 11:28:11.553222 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.553316 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:11 crc kubenswrapper[4728]: E0204 11:28:11.553558 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.561973 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.562034 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2pqp\" (UniqueName: \"kubernetes.io/projected/8fd2519d-be03-457c-b9d6-70862115f6a9-kube-api-access-t2pqp\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:11 crc kubenswrapper[4728]: E0204 11:28:11.562375 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:28:11 crc kubenswrapper[4728]: E0204 11:28:11.562505 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs podName:8fd2519d-be03-457c-b9d6-70862115f6a9 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:12.062487694 +0000 UTC m=+41.205192079 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs") pod "network-metrics-daemon-q6m9t" (UID: "8fd2519d-be03-457c-b9d6-70862115f6a9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.565205 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.576340 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.581428 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2pqp\" (UniqueName: \"kubernetes.io/projected/8fd2519d-be03-457c-b9d6-70862115f6a9-kube-api-access-t2pqp\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.599263 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b
007021cc01644669d7e0ed3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547382 6203 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547644 6203 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 11:28:08.548073 6203 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.548403 6203 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:08.548427 6203 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:08.548452 6203 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0204 11:28:08.548458 6203 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0204 11:28:08.548505 6203 factory.go:656] Stopping watch factory\\\\nI0204 11:28:08.548517 6203 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:08.548548 6203 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:08.548559 6203 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:08.548565 6203 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.612159 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.622605 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.622650 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.622681 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.622704 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.622715 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:11Z","lastTransitionTime":"2026-02-04T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.625106 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.645022 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.655146 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.664095 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.675098 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.682692 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.690939 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.702930 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.713820 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.724615 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.724864 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.724925 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.725016 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.725041 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.725087 4728 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:11Z","lastTransitionTime":"2026-02-04T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.733970 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.743697 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.759692 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.769194 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.781360 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.793371 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.806467 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.816826 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.827791 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.836440 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.836470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.836482 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.836499 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.836509 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:11Z","lastTransitionTime":"2026-02-04T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.843415 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547382 6203 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547644 6203 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 11:28:08.548073 6203 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.548403 6203 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:08.548427 6203 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:08.548452 6203 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0204 11:28:08.548458 6203 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0204 11:28:08.548505 6203 factory.go:656] Stopping watch factory\\\\nI0204 11:28:08.548517 6203 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:08.548548 6203 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:08.548559 6203 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:08.548565 6203 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.938990 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.939035 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.939046 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.939063 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:11 crc kubenswrapper[4728]: I0204 11:28:11.939074 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:11Z","lastTransitionTime":"2026-02-04T11:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.041673 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.041725 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.041736 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.041763 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.041774 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:12Z","lastTransitionTime":"2026-02-04T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.067362 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:12 crc kubenswrapper[4728]: E0204 11:28:12.067485 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:28:12 crc kubenswrapper[4728]: E0204 11:28:12.067566 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs podName:8fd2519d-be03-457c-b9d6-70862115f6a9 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:13.067545688 +0000 UTC m=+42.210250103 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs") pod "network-metrics-daemon-q6m9t" (UID: "8fd2519d-be03-457c-b9d6-70862115f6a9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.144506 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.144541 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.144551 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.144564 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.144574 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:12Z","lastTransitionTime":"2026-02-04T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.247035 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.247118 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.247136 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.247155 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.247168 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:12Z","lastTransitionTime":"2026-02-04T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.349704 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.349805 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.349819 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.349836 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.349847 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:12Z","lastTransitionTime":"2026-02-04T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.452117 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.452161 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.452175 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.452195 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.452210 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:12Z","lastTransitionTime":"2026-02-04T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.519168 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:02:10.893897289 +0000 UTC Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.554500 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.554531 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.554539 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.554550 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.554560 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:12Z","lastTransitionTime":"2026-02-04T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.656138 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.656360 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.656427 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.656499 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.656555 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:12Z","lastTransitionTime":"2026-02-04T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.759456 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.759857 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.759973 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.760073 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.760161 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:12Z","lastTransitionTime":"2026-02-04T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.862992 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.863041 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.863055 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.863070 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.863081 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:12Z","lastTransitionTime":"2026-02-04T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.965377 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.965412 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.965421 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.965435 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:12 crc kubenswrapper[4728]: I0204 11:28:12.965443 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:12Z","lastTransitionTime":"2026-02-04T11:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.067418 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.067668 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.067730 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.067837 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.067921 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:13Z","lastTransitionTime":"2026-02-04T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.076339 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:13 crc kubenswrapper[4728]: E0204 11:28:13.076522 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:28:13 crc kubenswrapper[4728]: E0204 11:28:13.076599 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs podName:8fd2519d-be03-457c-b9d6-70862115f6a9 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:15.076580472 +0000 UTC m=+44.219284867 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs") pod "network-metrics-daemon-q6m9t" (UID: "8fd2519d-be03-457c-b9d6-70862115f6a9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.170516 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.170549 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.170557 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.170569 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.170577 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:13Z","lastTransitionTime":"2026-02-04T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.272222 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.272264 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.272275 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.272291 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.272304 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:13Z","lastTransitionTime":"2026-02-04T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.375077 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.375276 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.375404 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.375517 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.375732 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:13Z","lastTransitionTime":"2026-02-04T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.478009 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.478282 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.478378 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.478462 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.478547 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:13Z","lastTransitionTime":"2026-02-04T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.519526 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:20:57.558888815 +0000 UTC Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.552967 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.553016 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.553050 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.552984 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:13 crc kubenswrapper[4728]: E0204 11:28:13.553116 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:13 crc kubenswrapper[4728]: E0204 11:28:13.553181 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:13 crc kubenswrapper[4728]: E0204 11:28:13.553281 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:13 crc kubenswrapper[4728]: E0204 11:28:13.553369 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.581010 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.581084 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.581101 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.581126 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.581140 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:13Z","lastTransitionTime":"2026-02-04T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.683848 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.683898 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.683910 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.683927 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.683951 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:13Z","lastTransitionTime":"2026-02-04T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.786096 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.786133 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.786141 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.786157 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.786166 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:13Z","lastTransitionTime":"2026-02-04T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.888588 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.888635 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.888645 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.888662 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.888672 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:13Z","lastTransitionTime":"2026-02-04T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.990938 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.991204 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.991298 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.991418 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:13 crc kubenswrapper[4728]: I0204 11:28:13.991510 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:13Z","lastTransitionTime":"2026-02-04T11:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.094088 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.094140 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.094157 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.094180 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.094203 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:14Z","lastTransitionTime":"2026-02-04T11:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.196063 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.196318 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.196417 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.196478 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.196539 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:14Z","lastTransitionTime":"2026-02-04T11:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.298582 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.299056 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.299068 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.299080 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.299089 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:14Z","lastTransitionTime":"2026-02-04T11:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.401151 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.401199 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.401214 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.401235 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.401250 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:14Z","lastTransitionTime":"2026-02-04T11:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.503111 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.503143 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.503151 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.503163 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.503171 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:14Z","lastTransitionTime":"2026-02-04T11:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 04 11:28:14 crc kubenswrapper[4728]: I0204 11:28:14.519895 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:23:00.555500236 +0000 UTC
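
This certificate_manager.go entry, and the two later ones at 11:28:15.520 and 11:28:16.521, report the same expiration but a different rotation deadline each time. That is consistent with client-go's certificate manager picking a jittered deadline inside the tail of the certificate's validity window on every evaluation. The sketch below assumes the commonly cited 70-90% window and an issue time not printed in this log; it is an illustration of the jitter, not the manager's code:

// Sketch: why each evaluation logs a different rotation deadline.
// Assumption: deadline drawn uniformly from [70%, 90%) of the validity window.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // issue time assumed; not in the log
	for i := 0; i < 3; i++ {
		// Three calls, three different deadlines, matching the three log entries.
		fmt.Println(rotationDeadline(notBefore, notAfter))
	}
}
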
Feb 04 11:28:15 crc kubenswrapper[4728]: I0204 11:28:15.093026 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:28:15 crc kubenswrapper[4728]: E0204 11:28:15.093196 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 04 11:28:15 crc kubenswrapper[4728]: E0204 11:28:15.093265 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs podName:8fd2519d-be03-457c-b9d6-70862115f6a9 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:19.093247378 +0000 UTC m=+48.235951763 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs") pod "network-metrics-daemon-q6m9t" (UID: "8fd2519d-be03-457c-b9d6-70862115f6a9") : object "openshift-multus"/"metrics-daemon-secret" not registered
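
The nestedpendingoperations entry above refuses retries for 4s ("durationBeforeRetry 4s"). That is the exponential backoff the kubelet applies to repeatedly failing volume operations: each failed MountVolume attempt doubles the wait up to a cap. The 500 ms seed and 2m2s cap in the sketch are assumptions based on the kubelet's exponentialbackoff defaults, not values printed in this log:

// Sketch: the backoff pattern behind "durationBeforeRetry 4s".
// Assumptions: 500ms initial delay, doubling per failure, capped at 2m2s.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	last time.Duration
}

func (b *backoff) next() time.Duration {
	const (
		initialWait = 500 * time.Millisecond
		maxWait     = 2*time.Minute + 2*time.Second
	)
	if b.last == 0 {
		b.last = initialWait
	} else {
		b.last *= 2
		if b.last > maxWait {
			b.last = maxWait
		}
	}
	return b.last
}

func main() {
	var b backoff
	for i := 1; i <= 5; i++ {
		// Prints 500ms, 1s, 2s, 4s, 8s; the log above is at the 4s step.
		fmt.Printf("retry %d after %v\n", i, b.next())
	}
}
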
Feb 04 11:28:15 crc kubenswrapper[4728]: I0204 11:28:15.520019 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:01:55.388391835 +0000 UTC
Feb 04 11:28:15 crc kubenswrapper[4728]: I0204 11:28:15.553388 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:15 crc kubenswrapper[4728]: E0204 11:28:15.553730 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:28:15 crc kubenswrapper[4728]: I0204 11:28:15.553458 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:28:15 crc kubenswrapper[4728]: E0204 11:28:15.553985 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:28:15 crc kubenswrapper[4728]: I0204 11:28:15.553402 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:15 crc kubenswrapper[4728]: E0204 11:28:15.554198 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:28:15 crc kubenswrapper[4728]: I0204 11:28:15.553545 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:28:15 crc kubenswrapper[4728]: E0204 11:28:15.554374 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:28:16 crc kubenswrapper[4728]: I0204 11:28:16.521126 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:56:09.829637306 +0000 UTC
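
From 11:28:17 onward this log shows every pod status patch failing because a webhook serving certificate expired on 2025-08-24 while the node clock reads 2026-02-04. A minimal sketch of the same x509 validity check that the TLS handshake performs; the certificate path is an assumption for illustration (the webhook container mounts /etc/webhook-cert/ per its volumeMounts below):

// Sketch: the validity check behind "certificate has expired or is not yet valid".
// Assumption: the webhook serving cert lives at /etc/webhook-cert/tls.crt.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// The state the webhook server below is in: current time is past NotAfter.
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("not yet valid until", cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Println("valid until", cert.NotAfter.Format(time.RFC3339))
	}
}
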
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.165140 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.165180 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.165188 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.165203 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.165212 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:17Z","lastTransitionTime":"2026-02-04T11:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.266132 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.267501 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.267535 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.267545 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.267558 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.267569 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:17Z","lastTransitionTime":"2026-02-04T11:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.282057 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.293444 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.304189 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc 
kubenswrapper[4728]: I0204 11:28:17.317538 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.328201 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.338493 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.346728 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.355782 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.366793 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.369348 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.369374 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.369382 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.369394 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.369404 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:17Z","lastTransitionTime":"2026-02-04T11:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.378005 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.392265 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.404658 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.421923 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.435447 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.447262 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.465926 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b
007021cc01644669d7e0ed3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547382 6203 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547644 6203 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 11:28:08.548073 6203 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.548403 6203 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:08.548427 6203 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:08.548452 6203 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0204 11:28:08.548458 6203 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0204 11:28:08.548505 6203 factory.go:656] Stopping watch factory\\\\nI0204 11:28:08.548517 6203 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:08.548548 6203 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:08.548559 6203 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:08.548565 6203 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.471129 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.471172 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.471184 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.471202 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.471215 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:17Z","lastTransitionTime":"2026-02-04T11:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.522036 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:22:58.027994064 +0000 UTC Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.553471 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.553544 4728 util.go:30] "No sandbox for pod can be found. 
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.553544 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:17 crc kubenswrapper[4728]: E0204 11:28:17.553590 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.553639 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:17 crc kubenswrapper[4728]: E0204 11:28:17.553684 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.553478 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:28:17 crc kubenswrapper[4728]: E0204 11:28:17.553809 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:28:17 crc kubenswrapper[4728]: E0204 11:28:17.553914 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.572916 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.572952 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.572962 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.572976 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.572988 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:17Z","lastTransitionTime":"2026-02-04T11:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.675284 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.675329 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.675338 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.675356 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.675365 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:17Z","lastTransitionTime":"2026-02-04T11:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.777623 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.777902 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.778031 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.778161 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.778281 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:17Z","lastTransitionTime":"2026-02-04T11:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.880118 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.880181 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.880192 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.880208 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.880218 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:17Z","lastTransitionTime":"2026-02-04T11:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.978130 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.978197 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.978208 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.978245 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.978254 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:17Z","lastTransitionTime":"2026-02-04T11:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:17 crc kubenswrapper[4728]: E0204 11:28:17.992453 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:17Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.996535 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
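
[annotation] The err= value in the failed PATCH above embeds the whole node-status payload as escaped JSON, so it can be recovered and inspected offline. A minimal sketch, assuming exactly the quoting layers shown in this log (a log shipper may add another layer, in which case the unescaping step needs adjusting); the helper name is illustrative:

    import json
    import re

    # Illustrative decoder, not kubelet code: grab the payload between the
    # \"...\" pair and strip the inner \\\" escaping to get plain JSON.
    PATCH_RE = re.compile(r'failed to patch status \\"(.*)\\" for node')

    def extract_patch(entry):
        m = PATCH_RE.search(entry)
        if m is None:
            raise ValueError("no patch payload in this entry")
        return json.loads(m.group(1).replace(r'\\\"', '"'))

    # e.g. extract_patch(entry)["status"]["conditions"][-1]["reason"]
    # -> "KubeletNotReady"; the "$setElementOrder/conditions" key marks the
    # payload as a strategic merge patch rather than a full object replace.
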
event="NodeHasNoDiskPressure" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.996609 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.996625 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:17 crc kubenswrapper[4728]: I0204 11:28:17.996636 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:17Z","lastTransitionTime":"2026-02-04T11:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: E0204 11:28:18.010045 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.014538 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
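
[annotation] The recurring KubeletNotReady condition is a separate problem from the webhook: the Ready condition stays False while no CNI network config exists in the directory named in the message. A sketch of that readiness check, assuming the directory from the log and the .conf/.conflist/.json extensions that the CNI libcni convention uses:

    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"   # path from the log message
    CNI_EXTS = (".conf", ".conflist", ".json")   # libcni config extensions

    def cni_config_present(conf_dir=CNI_CONF_DIR):
        # The network plugin reports ready once any config file appears here.
        try:
            return any(name.endswith(CNI_EXTS) for name in os.listdir(conf_dir))
        except FileNotFoundError:
            return False

    # While ovnkube-node is crash-looping (see the CrashLoopBackOff status
    # earlier in this log), nothing writes the config, so this stays False
    # and the node stays NotReady.
    print(cni_config_present())
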
event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.014627 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.014642 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.014651 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: E0204 11:28:18.025784 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.029111 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
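
[annotation] To confirm the expiry claim directly against the webhook endpoint named in these errors, the serving certificate can be fetched without verification and its notAfter read. A hedged sketch, run on the node itself since the webhook listens on loopback; it assumes the third-party cryptography package is installed:

    import ssl
    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the webhook errors above

    # get_server_certificate does not validate the peer, so an expired
    # certificate is still returned as PEM.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notAfter:", cert.not_valid_after)  # expect 2025-08-24 17:21:41
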
event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.029162 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.029178 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.029187 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: E0204 11:28:18.041676 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.044779 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.044815 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.044823 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.044837 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.044847 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: E0204 11:28:18.057172 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: E0204 11:28:18.057328 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.058711 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.058733 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.058742 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.058769 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.058779 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.162094 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.162423 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.162617 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.162860 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.163051 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.265439 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.265468 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.265477 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.265492 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.265500 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.368044 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.368300 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.368451 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.368526 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.368619 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.424858 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.433389 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.439347 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.454550 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.467270 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.470945 4728 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.470989 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.470999 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.471014 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.471024 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.477889 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.487921 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.500067 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manag
er-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.510603 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.523203 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:13:32.969505006 +0000 UTC Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.524035 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.534967 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.577224 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.577274 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.577284 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.577300 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.577312 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.584814 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.599165 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.611353 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.627791 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b
007021cc01644669d7e0ed3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547382 6203 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547644 6203 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 11:28:08.548073 6203 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.548403 6203 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:08.548427 6203 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:08.548452 6203 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0204 11:28:08.548458 6203 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0204 11:28:08.548505 6203 factory.go:656] Stopping watch factory\\\\nI0204 11:28:08.548517 6203 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:08.548548 6203 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:08.548559 6203 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:08.548565 6203 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.639650 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.647637 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.655658 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:18Z is 
after 2025-08-24T17:21:41Z" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.680004 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.680031 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.680040 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.680051 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.680060 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.782034 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.782082 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.782098 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.782115 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.782126 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.884401 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.884788 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.884936 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.885120 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.885277 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
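
The repeating NodeNotReady records are a second, independent failure: kubelet mirrors the container runtime's NetworkReady status into the node's Ready condition, and the runtime keeps reporting NetworkPluginNotReady until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (the ovnkube-controller container that would normally produce one is in CrashLoopBackOff, per the ovnkube-node-c6r5d record above). A rough sketch of the presence check follows, assuming the conventional *.conf, *.conflist and *.json extensions; the real lookup and precedence rules live in the CRI runtime (cri-o here), not in this snippet.

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			continue // Glob errs only on a malformed pattern
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// The condition that keeps this node NotReady in the log.
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		return
	}
	fmt.Println("CNI configuration candidates:", found)
}
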
Has your network provider started?"} Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.987919 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.987966 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.987974 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.987988 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:18 crc kubenswrapper[4728]: I0204 11:28:18.987997 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:18Z","lastTransitionTime":"2026-02-04T11:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.090118 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.090161 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.090172 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.090191 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.090203 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:19Z","lastTransitionTime":"2026-02-04T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.125617 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:19 crc kubenswrapper[4728]: E0204 11:28:19.125812 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:28:19 crc kubenswrapper[4728]: E0204 11:28:19.125867 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs podName:8fd2519d-be03-457c-b9d6-70862115f6a9 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:27.125850665 +0000 UTC m=+56.268555060 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs") pod "network-metrics-daemon-q6m9t" (UID: "8fd2519d-be03-457c-b9d6-70862115f6a9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.193056 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.193139 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.193159 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.193183 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.193197 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:19Z","lastTransitionTime":"2026-02-04T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.295483 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.295535 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.295547 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.295564 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.295579 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:19Z","lastTransitionTime":"2026-02-04T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
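
The metrics-certs failure above is not retried immediately: kubelet tracks per-volume failure counts and schedules the next attempt after an exponentially growing delay, hence "No retries permitted until 11:28:27" and "durationBeforeRetry 8s". The sketch below reproduces that shape under assumed constants (500ms base, doubling per failure, 2m cap); kubelet's exact values may differ.

package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry doubles a base delay once per previous failure,
// capped at max. With a 500ms base, the fifth failure yields 8s.
func durationBeforeRetry(failures int, base, max time.Duration) time.Duration {
	d := base
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> wait %s\n",
			n, durationBeforeRetry(n, 500*time.Millisecond, 2*time.Minute))
	}
}

The underlying error, object "openshift-multus"/"metrics-daemon-secret" not registered, suggests the Secret is not yet present in the kubelet's local object cache, so backing off and retrying is the expected behavior rather than a fault of the volume plugin.
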
Has your network provider started?"} Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.398249 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.398319 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.398338 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.398358 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.398372 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:19Z","lastTransitionTime":"2026-02-04T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.500470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.500507 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.500520 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.500536 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.500547 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:19Z","lastTransitionTime":"2026-02-04T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.524118 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:03:25.726838849 +0000 UTC Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.553001 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.553035 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.553043 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:19 crc kubenswrapper[4728]: E0204 11:28:19.553138 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.553179 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:19 crc kubenswrapper[4728]: E0204 11:28:19.553257 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:19 crc kubenswrapper[4728]: E0204 11:28:19.553384 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:19 crc kubenswrapper[4728]: E0204 11:28:19.553596 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.602900 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.602947 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.602960 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.602976 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.602989 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:19Z","lastTransitionTime":"2026-02-04T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
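
One informational line above stands apart from the failure loop: the kubelet-serving certificate manager reports an expiration of 2026-02-24 with a rotation deadline of 2025-11-29, a deadline already behind the node's clock, so rotation is due immediately. Such deadlines are typically a jittered point inside the certificate's validity window; the sketch below assumes client-go's 70% to 90% jitter range and an issue date one year before expiry, both of which are assumptions rather than values read from this log.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point between 70% and 90% of the way
// through the certificate's validity window (assumed jitter range).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notBefore := time.Date(2025, time.February, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}

A deadline drawn from that range lands between roughly 2025-11-05 and 2026-01-17, which brackets the logged 2025-11-29.
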
Has your network provider started?"} Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.705819 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.705846 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.705854 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.705868 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.705877 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:19Z","lastTransitionTime":"2026-02-04T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.812436 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.812481 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.812491 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.812509 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.812521 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:19Z","lastTransitionTime":"2026-02-04T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.915565 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.915617 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.915631 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.915651 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:19 crc kubenswrapper[4728]: I0204 11:28:19.915667 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:19Z","lastTransitionTime":"2026-02-04T11:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.018803 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.018854 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.018868 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.018887 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.018900 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:20Z","lastTransitionTime":"2026-02-04T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.121172 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.121242 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.121255 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.121271 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.121281 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:20Z","lastTransitionTime":"2026-02-04T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.223275 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.223321 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.223334 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.223352 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.223362 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:20Z","lastTransitionTime":"2026-02-04T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.326142 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.326403 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.326415 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.326435 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.326448 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:20Z","lastTransitionTime":"2026-02-04T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.429607 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.429655 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.429667 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.429684 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.429696 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:20Z","lastTransitionTime":"2026-02-04T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.524991 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:06:58.848966234 +0000 UTC Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.532540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.532609 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.532631 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.532661 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.532683 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:20Z","lastTransitionTime":"2026-02-04T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.554700 4728 scope.go:117] "RemoveContainer" containerID="ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.634292 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.634321 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.634330 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.634344 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.634353 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:20Z","lastTransitionTime":"2026-02-04T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.735864 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.735948 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.735964 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.735982 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.735995 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:20Z","lastTransitionTime":"2026-02-04T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.838474 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.838517 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.838529 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.838544 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.838558 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:20Z","lastTransitionTime":"2026-02-04T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.861430 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/1.log" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.864034 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.864421 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.881024 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f634
6eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:20Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.900572 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:20Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.914607 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:20Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.925402 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:20Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.935892 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:20Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.940482 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.940517 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.940528 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.940543 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.940557 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:20Z","lastTransitionTime":"2026-02-04T11:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.948195 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:20Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.960175 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:20Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.972208 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:20Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.983916 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:20Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:20 crc kubenswrapper[4728]: I0204 11:28:20.997148 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:20Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.008005 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c9
7aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.021526 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.035481 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.042437 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.042474 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.042485 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.042500 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.042511 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:21Z","lastTransitionTime":"2026-02-04T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.052191 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547382 6203 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547644 6203 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 11:28:08.548073 6203 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.548403 6203 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:08.548427 6203 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:08.548452 6203 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0204 11:28:08.548458 6203 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0204 11:28:08.548505 6203 factory.go:656] Stopping watch factory\\\\nI0204 11:28:08.548517 6203 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:08.548548 6203 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:08.548559 6203 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:08.548565 6203 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.064439 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.073970 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.083590 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.145396 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.145444 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.145457 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.145474 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.145486 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:21Z","lastTransitionTime":"2026-02-04T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.247047 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.247085 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.247095 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.247109 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.247120 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:21Z","lastTransitionTime":"2026-02-04T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.350017 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.350058 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.350071 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.350096 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.350108 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:21Z","lastTransitionTime":"2026-02-04T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.453234 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.453281 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.453291 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.453305 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.453313 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:21Z","lastTransitionTime":"2026-02-04T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.525991 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:44:38.617343842 +0000 UTC
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.552642 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.552668 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.552739 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.552803 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:21 crc kubenswrapper[4728]: E0204 11:28:21.552869 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:28:21 crc kubenswrapper[4728]: E0204 11:28:21.552985 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:28:21 crc kubenswrapper[4728]: E0204 11:28:21.553072 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:28:21 crc kubenswrapper[4728]: E0204 11:28:21.553119 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.556603 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.556644 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.556660 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.556679 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.556692 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:21Z","lastTransitionTime":"2026-02-04T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.568980 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.580462 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.593988 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.603854 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.615065 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.626090 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.641195 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.654124 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.658134 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.658183 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.658199 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.658222 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.658237 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:21Z","lastTransitionTime":"2026-02-04T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.665195 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.682525 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.693826 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.708155 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.720070 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.736244 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c
6107196cd3111fef63eeb275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547382 6203 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547644 6203 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 11:28:08.548073 6203 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.548403 6203 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:08.548427 6203 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:08.548452 6203 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0204 11:28:08.548458 6203 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0204 11:28:08.548505 6203 factory.go:656] Stopping watch factory\\\\nI0204 11:28:08.548517 6203 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:08.548548 6203 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:08.548559 6203 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:08.548565 6203 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.751793 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.760015 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.760046 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.760056 4728 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.760072 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.760083 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:21Z","lastTransitionTime":"2026-02-04T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.764118 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.775353 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.862079 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.862124 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.862144 4728 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.862162 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.862173 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:21Z","lastTransitionTime":"2026-02-04T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.868110 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/2.log" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.868917 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/1.log" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.871168 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275" exitCode=1 Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.871206 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275"} Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.871236 4728 scope.go:117] "RemoveContainer" containerID="ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.871872 4728 scope.go:117] "RemoveContainer" containerID="e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275" Feb 04 11:28:21 crc kubenswrapper[4728]: E0204 11:28:21.872144 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.885018 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.897332 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.910445 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.921180 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.932363 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.944081 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.955938 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.965073 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.965314 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.965326 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.965343 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.965354 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:21Z","lastTransitionTime":"2026-02-04T11:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.969592 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.983155 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:21 crc kubenswrapper[4728]: I0204 11:28:21.999326 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:21Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.012565 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c9
7aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.025220 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.036588 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.055143 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c
6107196cd3111fef63eeb275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0aa48508df23fa9795443b02fa84867f27017b007021cc01644669d7e0ed3c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547382 6203 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.547644 6203 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0204 11:28:08.548073 6203 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0204 11:28:08.548403 6203 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:08.548427 6203 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:08.548452 6203 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0204 11:28:08.548458 6203 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0204 11:28:08.548505 6203 factory.go:656] Stopping watch factory\\\\nI0204 11:28:08.548517 6203 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:08.548548 6203 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:08.548559 6203 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:08.548565 6203 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:21Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 11:28:21.266411 6417 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:21.266449 6417 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:21.266462 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 11:28:21.266472 6417 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 11:28:21.266492 6417 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:21.266507 6417 handler.go:208] Removed *v1.Node event handler 7\\\\nI0204 11:28:21.266497 6417 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:21.266525 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:21.266518 6417 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:21.266555 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:21.266570 6417 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:21.266616 6417 factory.go:656] Stopping watch factory\\\\nI0204 11:28:21.266639 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:21.266682 
6417 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:21.266699 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d
77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.065834 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.068074 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.068290 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.068300 4728 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.068313 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.068322 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:22Z","lastTransitionTime":"2026-02-04T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.076485 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.087625 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.170676 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.170726 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.170740 4728 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.170776 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.170789 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:22Z","lastTransitionTime":"2026-02-04T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.273235 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.273271 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.273280 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.273294 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.273305 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:22Z","lastTransitionTime":"2026-02-04T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.375967 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.376020 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.376034 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.376055 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.376071 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:22Z","lastTransitionTime":"2026-02-04T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.478355 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.478401 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.478409 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.478424 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.478434 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:22Z","lastTransitionTime":"2026-02-04T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.526825 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:18:45.805685501 +0000 UTC Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.580181 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.580215 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.580223 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.580236 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.580245 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:22Z","lastTransitionTime":"2026-02-04T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.682618 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.682652 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.682661 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.682673 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.682682 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:22Z","lastTransitionTime":"2026-02-04T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.785907 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.785959 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.785972 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.785993 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.786006 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:22Z","lastTransitionTime":"2026-02-04T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.876990 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/2.log" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.882285 4728 scope.go:117] "RemoveContainer" containerID="e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275" Feb 04 11:28:22 crc kubenswrapper[4728]: E0204 11:28:22.882509 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.888567 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.888627 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.888647 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.888668 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.888684 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:22Z","lastTransitionTime":"2026-02-04T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.897450 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.913512 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.925859 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.947612 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:21Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 11:28:21.266411 6417 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:21.266449 6417 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:21.266462 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 11:28:21.266472 6417 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 11:28:21.266492 6417 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:21.266507 6417 handler.go:208] Removed *v1.Node event handler 7\\\\nI0204 11:28:21.266497 6417 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:21.266525 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:21.266518 6417 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:21.266555 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:21.266570 6417 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:21.266616 6417 factory.go:656] Stopping watch factory\\\\nI0204 11:28:21.266639 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:21.266682 6417 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:21.266699 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.961208 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.974494 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.986019 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:22Z is 
after 2025-08-24T17:21:41Z" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.991020 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.991054 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.991064 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.991078 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:22 crc kubenswrapper[4728]: I0204 11:28:22.991089 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:22Z","lastTransitionTime":"2026-02-04T11:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.002474 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7
f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 
11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:23Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.018504 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:23Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.033365 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:23Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.047588 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:23Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.060726 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:23Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.074407 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:23Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.086251 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:23Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.092907 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.093108 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.093189 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.093257 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.093328 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:23Z","lastTransitionTime":"2026-02-04T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.100037 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
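
Every status patch in this stretch fails the same way: the kubelet POSTs to the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743, and TLS verification rejects the webhook's serving certificate because the logged clock (2026-02-04T11:28:23Z) is past its NotAfter of 2025-08-24T17:21:41Z. A minimal sketch of that validity-window check, assuming a PEM-encoded certificate at a hypothetical path; Go's TLS handshake applies the equivalent test before any chain building:

    // expirycheck.go: reproduce the NotBefore/NotAfter test behind
    // "x509: certificate has expired or is not yet valid".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if now := time.Now(); now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            // Same shape as the kubelet's error above.
            fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }

Until that certificate is reissued (or the node clock corrected), every status patch routed through this webhook keeps failing, which is why the identical error repeats for each pod below.
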
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:23Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.114356 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:23Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.129309 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:23Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.196565 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.196657 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.196684 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.196724 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.196802 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:23Z","lastTransitionTime":"2026-02-04T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.299617 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.299691 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.299714 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.299742 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.299825 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:23Z","lastTransitionTime":"2026-02-04T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.371517 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.371688 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.371738 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.371859 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.372013 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.372091 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
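
The "No retries permitted until ... (durationBeforeRetry 32s)" entries that follow are the volume manager's per-operation exponential backoff: each failed mount or unmount attempt roughly doubles the delay before the next try, so a 32s wait means several failures have already occurred. A schematic of the progression; the 500ms start and ~2m cap match upstream kubelet defaults but should be treated as assumptions:

    // backoff.go: schematic of the doubling durationBeforeRetry in the log.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initial  = 500 * time.Millisecond        // assumed initial delay
            maxDelay = 2*time.Minute + 2*time.Second // assumed upper bound
        )
        d := initial
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, d)
            d *= 2 // double after every failure...
            if d > maxDelay {
                d = maxDelay // ...but never past the cap
            }
        }
    }

Attempt 7 of this progression lands on exactly 32s, consistent with the retries scheduled for 11:28:55 above.
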
No retries permitted until 2026-02-04 11:28:55.372067928 +0000 UTC m=+84.514772353 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.372379 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:28:55.372359455 +0000 UTC m=+84.515063880 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.372472 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.372547 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:55.372524688 +0000 UTC m=+84.515229113 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.372935 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.373103 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.373222 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.373437 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:55.373411228 +0000 UTC m=+84.516115653 (durationBeforeRetry 32s). 
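
The UnmountVolume failure just above has a different cause: the kubelet asks its CSI driver registry for "kubevirt.io.hostpath-provisioner" and finds nothing, because CSI drivers only appear in that list after registering over the kubelet's plugin-registration socket, and the driver pod is down along with everything else that needs networking. A schematic of the lookup; the registry shape and names here are illustrative, not the kubelet's actual types:

    // csiregistry.go: illustrative registered-driver lookup.
    package main

    import "fmt"

    // driver name -> endpoint; in the real kubelet this is fed by
    // plugin-registration callbacks, not populated statically.
    type csiRegistry map[string]string

    func (r csiRegistry) clientFor(driver string) (string, error) {
        ep, ok := r[driver]
        if !ok {
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
        }
        return ep, nil
    }

    func main() {
        reg := csiRegistry{} // empty: the driver pod has not re-registered yet
        if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("Unmounter.TearDownAt failed to get CSI client:", err)
        }
    }

The retry scheduled 32 seconds out can therefore only succeed once the hostpath-provisioner pod is running and registered again.
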
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.402469 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.402772 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.402858 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.402975 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.403046 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:23Z","lastTransitionTime":"2026-02-04T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.472670 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.472857 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.473116 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.473190 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.473308 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:55.473291473 +0000 UTC m=+84.615995858 (durationBeforeRetry 32s). 
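
The kube-api-access-* volumes failing here are projected volumes: a service-account token plus the "kube-root-ca.crt" and "openshift-service-ca.crt" ConfigMaps named in the errors, rendered into one mount. SetUp fails while those objects are still "not registered" in the kubelet's object cache after the restart. A sketch of that volume's composition using the k8s.io/api types (the real volume also maps specific keys and a downward-API namespace file, omitted here for brevity):

    // projected.go: the composition of a kube-api-access-* volume.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        expiry := int64(3607) // the default used for kube-api-access tokens
        vol := corev1.Volume{
            Name: "kube-api-access-s2dwl",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            ExpirationSeconds: &expiry,
                            Path:              "token",
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
                        }},
                    },
                },
            },
        }
        fmt.Printf("%s projects %d sources\n", vol.Name, len(vol.VolumeSource.Projected.Sources))
    }

Because every source must resolve before the volume can mount, a single missing ConfigMap is enough to produce the two-object error lists seen above.
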
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.507102 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.507353 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.507416 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.507475 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.507528 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:23Z","lastTransitionTime":"2026-02-04T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.527991 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:30:26.408816136 +0000 UTC Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.553047 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.553153 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.553047 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.553047 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.553259 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.553350 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
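
All of the "Error syncing pod, skipping" entries in this window reduce to one condition: the runtime reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, and that file only appears once the network operator's own pods come up. A rough approximation of the readiness test; the accepted extensions mirror what CRI-O's ocicni library scans for, but treat the list as an assumption:

    // cnicheck.go: approximate the "is CNI configured?" probe.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func cniConfigured(dir string) bool {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false // a missing or unreadable conf dir counts as unconfigured
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true
            }
        }
        return false
    }

    func main() {
        if !cniConfigured("/etc/kubernetes/cni/net.d") {
            fmt.Println("NetworkReady=false reason:NetworkPluginNotReady")
        }
    }

Host-network pods (multus, the static control-plane pods) keep running through this; only pods that need a sandbox IP, like the four named here, stay stuck on "No sandbox for pod can be found".
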
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.553489 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:23 crc kubenswrapper[4728]: E0204 11:28:23.553780 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.611098 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.611134 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.611284 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.611311 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.611326 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:23Z","lastTransitionTime":"2026-02-04T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.713686 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.713731 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.713743 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.713787 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.713797 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:23Z","lastTransitionTime":"2026-02-04T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.815791 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.815856 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.815869 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.815889 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.815903 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:23Z","lastTransitionTime":"2026-02-04T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.918354 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.918408 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.918422 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.918442 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:23 crc kubenswrapper[4728]: I0204 11:28:23.918460 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:23Z","lastTransitionTime":"2026-02-04T11:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.021326 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.021403 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.021421 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.021447 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.021465 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:24Z","lastTransitionTime":"2026-02-04T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.123791 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.123861 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.123921 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.123953 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.124006 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:24Z","lastTransitionTime":"2026-02-04T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.226531 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.226569 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.226580 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.226596 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.226608 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:24Z","lastTransitionTime":"2026-02-04T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.329585 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.329628 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.329640 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.329693 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.329707 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:24Z","lastTransitionTime":"2026-02-04T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.432565 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.432634 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.432651 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.432675 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.432693 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:24Z","lastTransitionTime":"2026-02-04T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.528774 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:49:43.270855595 +0000 UTC Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.534302 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.534349 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.534365 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.534385 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.534402 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:24Z","lastTransitionTime":"2026-02-04T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.637906 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.637947 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.637958 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.637972 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.637989 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:24Z","lastTransitionTime":"2026-02-04T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.740405 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.740441 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.740450 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.740463 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.740472 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:24Z","lastTransitionTime":"2026-02-04T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.843099 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.843154 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.843171 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.843230 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.843252 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:24Z","lastTransitionTime":"2026-02-04T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.945997 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.946038 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.946048 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.946060 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:24 crc kubenswrapper[4728]: I0204 11:28:24.946068 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:24Z","lastTransitionTime":"2026-02-04T11:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.048221 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.048254 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.048265 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.048280 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.048291 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:25Z","lastTransitionTime":"2026-02-04T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.154681 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.154717 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.154725 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.154739 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.154769 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:25Z","lastTransitionTime":"2026-02-04T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.257866 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.257919 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.257934 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.257954 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.257970 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:25Z","lastTransitionTime":"2026-02-04T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.360554 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.360592 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.360602 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.360618 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.360629 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:25Z","lastTransitionTime":"2026-02-04T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.463556 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.463593 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.463608 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.463624 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.463634 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:25Z","lastTransitionTime":"2026-02-04T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.528933 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:38:18.69207553 +0000 UTC
Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.554016 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.554125 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:25 crc kubenswrapper[4728]: E0204 11:28:25.554182 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.554266 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.554364 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:28:25 crc kubenswrapper[4728]: E0204 11:28:25.554273 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:28:25 crc kubenswrapper[4728]: E0204 11:28:25.554516 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:28:25 crc kubenswrapper[4728]: E0204 11:28:25.554674 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.566455 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.566524 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.566540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.566573 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.566592 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:25Z","lastTransitionTime":"2026-02-04T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.669207 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.669499 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.669563 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.669657 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.669716 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:25Z","lastTransitionTime":"2026-02-04T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.772922 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.772990 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.773007 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.773026 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.773042 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:25Z","lastTransitionTime":"2026-02-04T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.876191 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.876260 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.876281 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.876366 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.876400 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:25Z","lastTransitionTime":"2026-02-04T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.979627 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.979720 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.979738 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.979787 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:25 crc kubenswrapper[4728]: I0204 11:28:25.979806 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:25Z","lastTransitionTime":"2026-02-04T11:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.082391 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.082426 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.082436 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.082449 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.082460 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:26Z","lastTransitionTime":"2026-02-04T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.184840 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.184879 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.184890 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.184905 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.184915 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:26Z","lastTransitionTime":"2026-02-04T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.287961 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.288018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.288031 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.288049 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.288063 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:26Z","lastTransitionTime":"2026-02-04T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.391072 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.391147 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.391161 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.391182 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.391195 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:26Z","lastTransitionTime":"2026-02-04T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.493525 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.493589 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.493609 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.493636 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.493654 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:26Z","lastTransitionTime":"2026-02-04T11:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 04 11:28:26 crc kubenswrapper[4728]: I0204 11:28:26.529976 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 17:02:29.066518881 +0000 UTC
Feb 04 11:28:27 crc kubenswrapper[4728]: I0204 11:28:27.216871 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:28:27 crc kubenswrapper[4728]: E0204 11:28:27.217034 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 04 11:28:27 crc kubenswrapper[4728]: E0204 11:28:27.217088 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs podName:8fd2519d-be03-457c-b9d6-70862115f6a9 nodeName:}" failed. No retries permitted until 2026-02-04 11:28:43.2170749 +0000 UTC m=+72.359779285 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs") pod "network-metrics-daemon-q6m9t" (UID: "8fd2519d-be03-457c-b9d6-70862115f6a9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 04 11:28:27 crc kubenswrapper[4728]: I0204 11:28:27.530431 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 00:07:16.865080922 +0000 UTC
Feb 04 11:28:28 crc kubenswrapper[4728]: E0204 11:28:28.468881 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:28Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.473137 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.473176 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.473187 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.473204 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.473216 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:28Z","lastTransitionTime":"2026-02-04T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:28 crc kubenswrapper[4728]: E0204 11:28:28.489037 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:28Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.493127 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.493211 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.493226 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.493245 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.493261 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:28Z","lastTransitionTime":"2026-02-04T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:28 crc kubenswrapper[4728]: E0204 11:28:28.509253 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:28Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.512978 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.513016 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.513026 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.513041 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.513051 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:28Z","lastTransitionTime":"2026-02-04T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:28 crc kubenswrapper[4728]: E0204 11:28:28.525832 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:28Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.529271 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.529299 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.529306 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.529319 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.529346 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:28Z","lastTransitionTime":"2026-02-04T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.530591 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:05:13.676861513 +0000 UTC Feb 04 11:28:28 crc kubenswrapper[4728]: E0204 11:28:28.541118 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:28Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:28 crc kubenswrapper[4728]: E0204 11:28:28.541264 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.542630 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.542665 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.542675 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.542689 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.542699 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:28Z","lastTransitionTime":"2026-02-04T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.645101 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.645186 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.645200 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.645216 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.645227 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:28Z","lastTransitionTime":"2026-02-04T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.747244 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.747271 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.747279 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.747292 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.747299 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:28Z","lastTransitionTime":"2026-02-04T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.850144 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.850220 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.850241 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.850268 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.850288 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:28Z","lastTransitionTime":"2026-02-04T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.954380 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.954428 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.954439 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.954457 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:28 crc kubenswrapper[4728]: I0204 11:28:28.954469 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:28Z","lastTransitionTime":"2026-02-04T11:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.058152 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.058225 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.058244 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.058269 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.058286 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:29Z","lastTransitionTime":"2026-02-04T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.161452 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.161525 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.161540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.161565 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.161580 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:29Z","lastTransitionTime":"2026-02-04T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.263695 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.263746 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.263793 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.263807 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.263816 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:29Z","lastTransitionTime":"2026-02-04T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.366604 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.366644 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.366661 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.366676 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.366685 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:29Z","lastTransitionTime":"2026-02-04T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.469374 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.469455 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.469478 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.469509 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.469526 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:29Z","lastTransitionTime":"2026-02-04T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.530826 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:02:09.087162882 +0000 UTC
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.553418 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.553488 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:29 crc kubenswrapper[4728]: E0204 11:28:29.553550 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.553566 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.553592 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:28:29 crc kubenswrapper[4728]: E0204 11:28:29.553661 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:28:29 crc kubenswrapper[4728]: E0204 11:28:29.553842 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:28:29 crc kubenswrapper[4728]: E0204 11:28:29.553935 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.572421 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.572480 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.572498 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.572583 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.572602 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:29Z","lastTransitionTime":"2026-02-04T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.676033 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.676091 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.676099 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.676114 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.676124 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:29Z","lastTransitionTime":"2026-02-04T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.779472 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.779558 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.779579 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.779611 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.779628 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:29Z","lastTransitionTime":"2026-02-04T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.882975 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.883064 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.883085 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.883432 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.883708 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:29Z","lastTransitionTime":"2026-02-04T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.986392 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.986465 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.986499 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.986526 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:29 crc kubenswrapper[4728]: I0204 11:28:29.986547 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:29Z","lastTransitionTime":"2026-02-04T11:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.089411 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.089474 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.089486 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.089504 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.089517 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:30Z","lastTransitionTime":"2026-02-04T11:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.192191 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.192230 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.192238 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.192252 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.192261 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:30Z","lastTransitionTime":"2026-02-04T11:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.294848 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.294896 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.294912 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.294929 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.294942 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:30Z","lastTransitionTime":"2026-02-04T11:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.397335 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.397379 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.397388 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.397403 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.397412 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:30Z","lastTransitionTime":"2026-02-04T11:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.500223 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.500271 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.500284 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.500301 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.500312 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:30Z","lastTransitionTime":"2026-02-04T11:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
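The certificate_manager.go:356 entries in this log (one above, another in the lines that follow) print a different "rotation deadline" on each attempt because client-go jitters the deadline inside the certificate's validity window; all three deadlines in this excerpt are already in the past relative to the node clock (2026-02-04), consistent with the manager re-attempting rotation on every tick. A Go sketch of the idea; the 70-90% window is an assumption modeled on the upstream jitteryDuration helper, and the issue time below is hypothetical, not a value read from this cluster:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // rotationDeadline picks a random point late in the validity window,
    // so repeated calls (one per rotation attempt, as in the log) differ.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
    }

    func main() {
    	// Expiration taken from the log; parse layout matches its format.
    	notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
    	notBefore := notAfter.AddDate(-1, 0, 0) // hypothetical issue time
    	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }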
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.531475 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:31:34.073045322 +0000 UTC
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.604271 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.604324 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.604341 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.604361 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.604377 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:30Z","lastTransitionTime":"2026-02-04T11:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.708188 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.708255 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.708271 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.708295 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.708313 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:30Z","lastTransitionTime":"2026-02-04T11:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.810875 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.810939 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.810960 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.810985 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.811004 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:30Z","lastTransitionTime":"2026-02-04T11:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.914144 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.914171 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.914179 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.914191 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:30 crc kubenswrapper[4728]: I0204 11:28:30.914199 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:30Z","lastTransitionTime":"2026-02-04T11:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.017498 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.017559 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.017574 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.017592 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.017606 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:31Z","lastTransitionTime":"2026-02-04T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.120267 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.120305 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.120315 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.120341 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.120353 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:31Z","lastTransitionTime":"2026-02-04T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.223878 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.223935 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.223953 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.223977 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.223995 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:31Z","lastTransitionTime":"2026-02-04T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.326464 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.326503 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.326540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.326556 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.326571 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:31Z","lastTransitionTime":"2026-02-04T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
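The status_manager.go:875 failures further down in this excerpt all share one root cause: the network-node-identity webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24, while the node clock reads 2026-02-04, so every status patch is rejected with an x509 validity error. A minimal Go sketch reproducing just that validity check (host and port come from the log; this is an illustration, not an OpenShift tool):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	// InsecureSkipVerify lets us fetch the peer certificate even though
    	// normal verification would reject it as expired.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		fmt.Println("dial:", err)
    		return
    	}
    	defer conn.Close()
    	cert := conn.ConnectionState().PeerCertificates[0]
    	if now := time.Now(); now.After(cert.NotAfter) {
    		fmt.Printf("x509: certificate has expired: current time %s is after %s\n",
    			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
    	}
    }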
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.429897 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.429947 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.429960 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.429979 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.429997 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:31Z","lastTransitionTime":"2026-02-04T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.531670 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:43:11.613980504 +0000 UTC
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.532238 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.532299 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.532322 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.532349 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.532369 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:31Z","lastTransitionTime":"2026-02-04T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.553144 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:28:31 crc kubenswrapper[4728]: E0204 11:28:31.553306 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.553643 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.553651 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.553706 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:31 crc kubenswrapper[4728]: E0204 11:28:31.553853 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:28:31 crc kubenswrapper[4728]: E0204 11:28:31.554018 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:28:31 crc kubenswrapper[4728]: E0204 11:28:31.554110 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.573099 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.593377 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.611023 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.635667 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.635725 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.635737 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.636012 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.636167 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:31Z","lastTransitionTime":"2026-02-04T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.638202 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:21Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 11:28:21.266411 6417 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:21.266449 6417 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:21.266462 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 11:28:21.266472 6417 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 11:28:21.266492 6417 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:21.266507 6417 handler.go:208] Removed *v1.Node event handler 7\\\\nI0204 11:28:21.266497 6417 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:21.266525 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:21.266518 6417 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:21.266555 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:21.266570 6417 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:21.266616 6417 factory.go:656] Stopping watch factory\\\\nI0204 11:28:21.266639 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:21.266682 6417 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:21.266699 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.654309 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.666453 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.678870 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is 
after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.693335 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.707119 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.719420 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.730221 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.737983 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.738055 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.738068 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.738103 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.738115 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:31Z","lastTransitionTime":"2026-02-04T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.741159 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.753942 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799
488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.765378 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.775936 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.787393 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.801440 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:31Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.841428 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.841482 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.841494 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.841512 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.841524 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:31Z","lastTransitionTime":"2026-02-04T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.944262 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.944312 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.944327 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.944346 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:31 crc kubenswrapper[4728]: I0204 11:28:31.944359 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:31Z","lastTransitionTime":"2026-02-04T11:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.046409 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.046437 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.046444 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.046456 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.046464 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:32Z","lastTransitionTime":"2026-02-04T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.148536 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.148578 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.148588 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.148601 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.148611 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:32Z","lastTransitionTime":"2026-02-04T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.251782 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.251822 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.251831 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.251845 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.251855 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:32Z","lastTransitionTime":"2026-02-04T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.353566 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.353632 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.353654 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.353684 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.353708 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:32Z","lastTransitionTime":"2026-02-04T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.455528 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.455836 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.455844 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.455857 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.455865 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:32Z","lastTransitionTime":"2026-02-04T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.532616 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:21:36.832733632 +0000 UTC Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.557963 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.558007 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.558018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.558031 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.558043 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:32Z","lastTransitionTime":"2026-02-04T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.661015 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.661075 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.661094 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.661118 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.661135 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:32Z","lastTransitionTime":"2026-02-04T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.763273 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.763317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.763326 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.763340 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.763351 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:32Z","lastTransitionTime":"2026-02-04T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.866299 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.866352 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.866366 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.866381 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.866405 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:32Z","lastTransitionTime":"2026-02-04T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.968540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.968584 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.968593 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.968607 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:32 crc kubenswrapper[4728]: I0204 11:28:32.968616 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:32Z","lastTransitionTime":"2026-02-04T11:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.071473 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.072327 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.072348 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.072369 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.072382 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:33Z","lastTransitionTime":"2026-02-04T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.175436 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.175502 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.175520 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.175545 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.175564 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:33Z","lastTransitionTime":"2026-02-04T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.277298 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.277345 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.277355 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.277366 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.277396 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:33Z","lastTransitionTime":"2026-02-04T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.380682 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.380728 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.380737 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.380774 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.380784 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:33Z","lastTransitionTime":"2026-02-04T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.483476 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.483515 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.483523 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.483538 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.483547 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:33Z","lastTransitionTime":"2026-02-04T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.533466 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:20:20.994598295 +0000 UTC Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.552966 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.553062 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:33 crc kubenswrapper[4728]: E0204 11:28:33.553097 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.553133 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.553173 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:33 crc kubenswrapper[4728]: E0204 11:28:33.553317 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:33 crc kubenswrapper[4728]: E0204 11:28:33.553562 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:33 crc kubenswrapper[4728]: E0204 11:28:33.553959 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.585411 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.585461 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.585472 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.585487 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.585498 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:33Z","lastTransitionTime":"2026-02-04T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.688601 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.688658 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.688672 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.688693 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.688709 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:33Z","lastTransitionTime":"2026-02-04T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.790938 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.790986 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.790995 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.791010 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.791020 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:33Z","lastTransitionTime":"2026-02-04T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.893893 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.893929 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.893937 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.893948 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.893958 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:33Z","lastTransitionTime":"2026-02-04T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.996217 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.996292 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.996313 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.996330 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:33 crc kubenswrapper[4728]: I0204 11:28:33.996342 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:33Z","lastTransitionTime":"2026-02-04T11:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.099103 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.099152 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.099164 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.099179 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.099192 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:34Z","lastTransitionTime":"2026-02-04T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.201926 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.201954 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.201962 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.201974 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.201983 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:34Z","lastTransitionTime":"2026-02-04T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.303655 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.303694 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.303727 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.303741 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.303762 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:34Z","lastTransitionTime":"2026-02-04T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.406548 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.406576 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.406583 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.406596 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.406606 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:34Z","lastTransitionTime":"2026-02-04T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.509367 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.509404 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.509415 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.509432 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.509444 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:34Z","lastTransitionTime":"2026-02-04T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.534387 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 04:19:44.28159623 +0000 UTC Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.611150 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.611179 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.611187 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.611198 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.611207 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:34Z","lastTransitionTime":"2026-02-04T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.713748 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.714320 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.714335 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.714352 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.714362 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:34Z","lastTransitionTime":"2026-02-04T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.816807 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.816851 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.816863 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.816879 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.816890 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:34Z","lastTransitionTime":"2026-02-04T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.919525 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.919591 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.919602 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.919620 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:34 crc kubenswrapper[4728]: I0204 11:28:34.919633 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:34Z","lastTransitionTime":"2026-02-04T11:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.022451 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.022527 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.022537 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.022551 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.022559 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:35Z","lastTransitionTime":"2026-02-04T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.124951 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.124992 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.125003 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.125019 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.125030 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:35Z","lastTransitionTime":"2026-02-04T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.227373 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.227453 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.227479 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.227694 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.227715 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:35Z","lastTransitionTime":"2026-02-04T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.330901 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.330966 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.330986 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.331010 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.331027 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:35Z","lastTransitionTime":"2026-02-04T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.433695 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.433729 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.433743 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.433787 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.433798 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:35Z","lastTransitionTime":"2026-02-04T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.534676 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:23:46.144131766 +0000 UTC Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.536062 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.536143 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.536171 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.536202 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.536244 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:35Z","lastTransitionTime":"2026-02-04T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.553536 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.553612 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.553570 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.553826 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:35 crc kubenswrapper[4728]: E0204 11:28:35.553812 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:35 crc kubenswrapper[4728]: E0204 11:28:35.553942 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:35 crc kubenswrapper[4728]: E0204 11:28:35.554852 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:35 crc kubenswrapper[4728]: E0204 11:28:35.554956 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.555197 4728 scope.go:117] "RemoveContainer" containerID="e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275" Feb 04 11:28:35 crc kubenswrapper[4728]: E0204 11:28:35.555365 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.638966 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.639030 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.639047 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.639069 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.639081 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:35Z","lastTransitionTime":"2026-02-04T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.742201 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.742242 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.742257 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.742276 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.742292 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:35Z","lastTransitionTime":"2026-02-04T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.844940 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.844998 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.845009 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.845029 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.845042 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:35Z","lastTransitionTime":"2026-02-04T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.948206 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.948272 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.948286 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.948309 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:35 crc kubenswrapper[4728]: I0204 11:28:35.948321 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:35Z","lastTransitionTime":"2026-02-04T11:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.051296 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.051359 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.051372 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.051393 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.051409 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:36Z","lastTransitionTime":"2026-02-04T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.153635 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.153669 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.153677 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.153691 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.153700 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:36Z","lastTransitionTime":"2026-02-04T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.256239 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.256305 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.256317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.256333 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.256343 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:36Z","lastTransitionTime":"2026-02-04T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.358421 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.358460 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.358470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.358484 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.358495 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:36Z","lastTransitionTime":"2026-02-04T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.461039 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.461114 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.461136 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.461154 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.461166 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:36Z","lastTransitionTime":"2026-02-04T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.534864 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:34:21.299667446 +0000 UTC
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.563838 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.563879 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.563891 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.563908 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.563919 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:36Z","lastTransitionTime":"2026-02-04T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.666173 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.666203 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.666212 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.666226 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.666235 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:36Z","lastTransitionTime":"2026-02-04T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.768855 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.768897 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.768911 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.768928 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.768942 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:36Z","lastTransitionTime":"2026-02-04T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.871665 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.871716 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.871730 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.872033 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.872074 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:36Z","lastTransitionTime":"2026-02-04T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.975588 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.975637 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.975654 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.975679 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:36 crc kubenswrapper[4728]: I0204 11:28:36.975695 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:36Z","lastTransitionTime":"2026-02-04T11:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.079981 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.080056 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.080068 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.080085 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.080096 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:37Z","lastTransitionTime":"2026-02-04T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.182198 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.182239 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.182250 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.182265 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.182276 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:37Z","lastTransitionTime":"2026-02-04T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.284895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.284928 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.284936 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.284948 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.284957 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:37Z","lastTransitionTime":"2026-02-04T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.387545 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.387599 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.387656 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.387687 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.387708 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:37Z","lastTransitionTime":"2026-02-04T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.490193 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.490227 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.490238 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.490252 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.490262 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:37Z","lastTransitionTime":"2026-02-04T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.535923 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:51:07.791637056 +0000 UTC
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.552715 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.552736 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.552764 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:37 crc kubenswrapper[4728]: E0204 11:28:37.552835 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.552859 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:37 crc kubenswrapper[4728]: E0204 11:28:37.552910 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:28:37 crc kubenswrapper[4728]: E0204 11:28:37.553059 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:28:37 crc kubenswrapper[4728]: E0204 11:28:37.553731 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.592663 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.592696 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.592705 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.592721 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.592733 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:37Z","lastTransitionTime":"2026-02-04T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.695140 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.695200 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.695212 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.695231 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.695280 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:37Z","lastTransitionTime":"2026-02-04T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.797640 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.797687 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.797696 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.797715 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.797724 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:37Z","lastTransitionTime":"2026-02-04T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.899619 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.899676 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.899684 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.899695 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:37 crc kubenswrapper[4728]: I0204 11:28:37.899704 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:37Z","lastTransitionTime":"2026-02-04T11:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.002011 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.002053 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.002064 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.002078 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.002087 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.104446 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.104488 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.104522 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.104540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.104551 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.207147 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.207184 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.207198 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.207213 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.207226 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.309717 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.309799 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.309818 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.309837 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.309851 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.412484 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.412529 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.412540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.412555 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.412566 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.514903 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.514949 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.514960 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.514976 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.514991 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.536059 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:49:58.046103728 +0000 UTC
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.546276 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.546315 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.546324 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.546339 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.546348 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:38 crc kubenswrapper[4728]: E0204 11:28:38.560699 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:38Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.564340 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.564376 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.564390 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.564410 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.564423 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:38 crc kubenswrapper[4728]: E0204 11:28:38.575855 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:38Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.578654 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.578693 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.578701 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.578717 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.578726 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:38 crc kubenswrapper[4728]: E0204 11:28:38.591888 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:38Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.595913 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.595956 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.595967 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.595982 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.595995 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:38 crc kubenswrapper[4728]: E0204 11:28:38.608209 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:38Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.611856 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.611912 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.611924 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.611942 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.611954 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:38 crc kubenswrapper[4728]: E0204 11:28:38.623359 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:38Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:38 crc kubenswrapper[4728]: E0204 11:28:38.623574 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.625050 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.625089 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.625099 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.625114 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.625124 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.726853 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.726882 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.726892 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.726908 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.726919 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.829575 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.829637 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.829649 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.829664 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.829675 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.932121 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.932184 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.932197 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.932231 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:38 crc kubenswrapper[4728]: I0204 11:28:38.932243 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:38Z","lastTransitionTime":"2026-02-04T11:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.035348 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.035396 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.035409 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.035427 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.035440 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:39Z","lastTransitionTime":"2026-02-04T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.137846 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.137895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.137911 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.137930 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.137942 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:39Z","lastTransitionTime":"2026-02-04T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.240618 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.240654 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.240663 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.240678 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.240688 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:39Z","lastTransitionTime":"2026-02-04T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.342355 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.342411 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.342428 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.342449 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.342465 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:39Z","lastTransitionTime":"2026-02-04T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.445369 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.445430 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.445446 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.445471 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.445489 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:39Z","lastTransitionTime":"2026-02-04T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.536903 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:23:30.404462857 +0000 UTC Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.547892 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.547934 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.547946 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.547961 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.547973 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:39Z","lastTransitionTime":"2026-02-04T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.553167 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.553209 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.553235 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.553263 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:39 crc kubenswrapper[4728]: E0204 11:28:39.553279 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:39 crc kubenswrapper[4728]: E0204 11:28:39.553359 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:39 crc kubenswrapper[4728]: E0204 11:28:39.553421 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:39 crc kubenswrapper[4728]: E0204 11:28:39.553541 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.651264 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.651309 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.651317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.651330 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.651339 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:39Z","lastTransitionTime":"2026-02-04T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.754572 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.754618 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.754633 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.754651 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.754663 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:39Z","lastTransitionTime":"2026-02-04T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.857192 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.857243 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.857256 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.857275 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.857288 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:39Z","lastTransitionTime":"2026-02-04T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.960365 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.960458 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.960471 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.960492 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:39 crc kubenswrapper[4728]: I0204 11:28:39.960505 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:39Z","lastTransitionTime":"2026-02-04T11:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.064428 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.064484 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.064497 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.064514 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.064526 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:40Z","lastTransitionTime":"2026-02-04T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.167109 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.167183 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.167205 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.167230 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.167244 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:40Z","lastTransitionTime":"2026-02-04T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.270214 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.270283 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.270293 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.270308 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.270318 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:40Z","lastTransitionTime":"2026-02-04T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.372688 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.372748 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.372806 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.372870 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.372890 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:40Z","lastTransitionTime":"2026-02-04T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.475985 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.476011 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.476020 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.476033 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.476042 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:40Z","lastTransitionTime":"2026-02-04T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.537427 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:40:44.44647216 +0000 UTC Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.578369 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.578415 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.578426 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.578443 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.578458 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:40Z","lastTransitionTime":"2026-02-04T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
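The recurring NetworkPluginNotReady message points at an empty /etc/kubernetes/cni/net.d/. A rough Go sketch of a simplified version of the readiness decision: scan the conf directory for .conf/.conflist/.json files. The real CRI-O/libcni logic also parses and validates the files; this only checks presence, and the extension set is the common libcni convention, assumed here.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir holds at least one CNI config file.
// Simplified stand-in for the runtime's check; presence only, no parsing.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Directory named in the kubelet errors above.
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	fmt.Println("CNI configuration present:", ok)
}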
Has your network provider started?"} Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.681077 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.681409 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.681538 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.681628 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.681721 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:40Z","lastTransitionTime":"2026-02-04T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.784014 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.784105 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.784122 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.784152 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.784169 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:40Z","lastTransitionTime":"2026-02-04T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.897342 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.897393 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.897403 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.897420 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.897432 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:40Z","lastTransitionTime":"2026-02-04T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.999880 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.999936 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.999948 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:40 crc kubenswrapper[4728]: I0204 11:28:40.999967 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:40.999978 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:40Z","lastTransitionTime":"2026-02-04T11:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.102965 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.103013 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.103027 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.103077 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.103089 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:41Z","lastTransitionTime":"2026-02-04T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.207185 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.207246 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.207258 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.207279 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.207290 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:41Z","lastTransitionTime":"2026-02-04T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
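The certificate_manager.go:356 rotation lines above and below sit next to the repeated webhook failures further down ("tls: failed to verify certificate: x509: certificate has expired or is not yet valid ... is after 2025-08-24T17:21:41Z"). A minimal Go sketch of the same validity-window check, loading a PEM certificate from disk; the path reuses the /etc/webhook-cert/ mount named in the pod status below, but the tls.crt filename is an assumption.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute the webhook's actual serving cert.
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	now := time.Now()
	// The same NotBefore/NotAfter window test that makes the TLS
	// handshake fail with "certificate has expired or is not yet valid".
	switch {
	case now.Before(cert.NotBefore):
		fmt.Println("certificate not yet valid until", cert.NotBefore)
	case now.After(cert.NotAfter):
		fmt.Println("certificate expired at", cert.NotAfter)
	default:
		fmt.Println("certificate valid until", cert.NotAfter)
	}
}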
Has your network provider started?"} Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.309597 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.309628 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.309637 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.309651 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.309662 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:41Z","lastTransitionTime":"2026-02-04T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.412787 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.413044 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.413143 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.413232 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.413313 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:41Z","lastTransitionTime":"2026-02-04T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.515447 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.515504 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.515531 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.515560 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.515576 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:41Z","lastTransitionTime":"2026-02-04T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.537628 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 03:53:30.860237726 +0000 UTC Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.552870 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:41 crc kubenswrapper[4728]: E0204 11:28:41.552993 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.553058 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:41 crc kubenswrapper[4728]: E0204 11:28:41.553112 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.553435 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:41 crc kubenswrapper[4728]: E0204 11:28:41.553504 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.553657 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:41 crc kubenswrapper[4728]: E0204 11:28:41.553922 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.570658 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.584897 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.597880 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.616082 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c
6107196cd3111fef63eeb275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:21Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 11:28:21.266411 6417 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:21.266449 6417 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:21.266462 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 11:28:21.266472 6417 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 11:28:21.266492 6417 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:21.266507 6417 handler.go:208] Removed *v1.Node event handler 7\\\\nI0204 11:28:21.266497 6417 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:21.266525 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:21.266518 6417 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:21.266555 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:21.266570 6417 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:21.266616 6417 factory.go:656] Stopping watch factory\\\\nI0204 11:28:21.266639 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:21.266682 6417 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:21.266699 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.617836 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.617907 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.617928 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.617958 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.617973 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:41Z","lastTransitionTime":"2026-02-04T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.628249 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.638624 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.649052 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.666565 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.677942 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.689732 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.700719 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.713679 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.720698 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.720882 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.720962 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.721050 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.721259 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:41Z","lastTransitionTime":"2026-02-04T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.728433 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.743383 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.756658 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.771151 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.785662 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:41Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.823352 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.823406 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.823417 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.823437 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.823451 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:41Z","lastTransitionTime":"2026-02-04T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.925876 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.925922 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.925934 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.925952 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:41 crc kubenswrapper[4728]: I0204 11:28:41.925964 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:41Z","lastTransitionTime":"2026-02-04T11:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.028050 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.028098 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.028108 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.028127 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.028136 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:42Z","lastTransitionTime":"2026-02-04T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.131381 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.131458 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.131477 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.131505 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.131522 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:42Z","lastTransitionTime":"2026-02-04T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.235049 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.235098 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.235107 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.235125 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.235135 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:42Z","lastTransitionTime":"2026-02-04T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.338412 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.338471 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.338482 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.338498 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.338507 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:42Z","lastTransitionTime":"2026-02-04T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.441707 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.441824 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.441848 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.441881 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.441901 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:42Z","lastTransitionTime":"2026-02-04T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.537986 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:04:25.615405475 +0000 UTC Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.544224 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.544339 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.544407 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.544551 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.544666 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:42Z","lastTransitionTime":"2026-02-04T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.648117 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.648427 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.648505 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.648598 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.648686 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:42Z","lastTransitionTime":"2026-02-04T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.750968 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.751237 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.751313 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.751402 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.751491 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:42Z","lastTransitionTime":"2026-02-04T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.853914 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.854263 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.854485 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.854682 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.854944 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:42Z","lastTransitionTime":"2026-02-04T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.957295 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.957641 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.957811 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.957973 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:42 crc kubenswrapper[4728]: I0204 11:28:42.958123 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:42Z","lastTransitionTime":"2026-02-04T11:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.060945 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.061006 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.061021 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.061039 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.061053 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:43Z","lastTransitionTime":"2026-02-04T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.163140 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.163191 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.163206 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.163230 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.163245 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:43Z","lastTransitionTime":"2026-02-04T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.265432 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.265508 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.265536 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.265581 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.265602 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:43Z","lastTransitionTime":"2026-02-04T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.286468 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:43 crc kubenswrapper[4728]: E0204 11:28:43.286729 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:28:43 crc kubenswrapper[4728]: E0204 11:28:43.286849 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs podName:8fd2519d-be03-457c-b9d6-70862115f6a9 nodeName:}" failed. No retries permitted until 2026-02-04 11:29:15.286823362 +0000 UTC m=+104.429527827 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs") pod "network-metrics-daemon-q6m9t" (UID: "8fd2519d-be03-457c-b9d6-70862115f6a9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.368357 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.368638 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.368780 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.368894 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.369019 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:43Z","lastTransitionTime":"2026-02-04T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.470995 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.471069 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.471081 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.471102 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.471113 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:43Z","lastTransitionTime":"2026-02-04T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.538514 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:16:18.276739429 +0000 UTC Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.552980 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.553021 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.553074 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:43 crc kubenswrapper[4728]: E0204 11:28:43.553136 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:43 crc kubenswrapper[4728]: E0204 11:28:43.553285 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:43 crc kubenswrapper[4728]: E0204 11:28:43.553565 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.553006 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:43 crc kubenswrapper[4728]: E0204 11:28:43.553996 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.573591 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.573637 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.573647 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.573662 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.573672 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:43Z","lastTransitionTime":"2026-02-04T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.676342 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.676380 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.676393 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.676411 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.676423 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:43Z","lastTransitionTime":"2026-02-04T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.778983 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.779045 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.779057 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.779075 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.779088 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:43Z","lastTransitionTime":"2026-02-04T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.882126 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.882197 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.882214 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.882234 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.882246 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:43Z","lastTransitionTime":"2026-02-04T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.985042 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.985100 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.985115 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.985134 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:43 crc kubenswrapper[4728]: I0204 11:28:43.985147 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:43Z","lastTransitionTime":"2026-02-04T11:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.088125 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.088544 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.088730 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.088976 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.089166 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:44Z","lastTransitionTime":"2026-02-04T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.191940 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.192004 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.192015 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.192037 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.192049 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:44Z","lastTransitionTime":"2026-02-04T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.294987 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.295078 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.295106 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.295139 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.295162 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:44Z","lastTransitionTime":"2026-02-04T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.397541 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.397624 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.397638 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.397665 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.397682 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:44Z","lastTransitionTime":"2026-02-04T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.499906 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.499982 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.500013 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.500033 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.500045 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:44Z","lastTransitionTime":"2026-02-04T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.539987 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:54:00.747435814 +0000 UTC Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.601913 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.602222 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.602451 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.602730 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.602989 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:44Z","lastTransitionTime":"2026-02-04T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.705927 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.706017 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.706044 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.706078 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.706102 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:44Z","lastTransitionTime":"2026-02-04T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.808559 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.808845 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.808909 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.808987 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.809056 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:44Z","lastTransitionTime":"2026-02-04T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.912005 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.912084 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.912096 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.912120 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:44 crc kubenswrapper[4728]: I0204 11:28:44.912132 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:44Z","lastTransitionTime":"2026-02-04T11:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.014894 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.014987 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.015015 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.015048 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.015071 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:45Z","lastTransitionTime":"2026-02-04T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.120944 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.121012 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.121031 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.121058 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.121083 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:45Z","lastTransitionTime":"2026-02-04T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.224721 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.225153 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.225572 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.225689 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.225791 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:45Z","lastTransitionTime":"2026-02-04T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.328720 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.328795 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.328811 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.328831 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.328842 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:45Z","lastTransitionTime":"2026-02-04T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.431439 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.431773 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.431843 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.431936 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.432002 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:45Z","lastTransitionTime":"2026-02-04T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.534951 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.535254 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.535401 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.535504 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.535599 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:45Z","lastTransitionTime":"2026-02-04T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.540163 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:50:04.419816025 +0000 UTC Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.552964 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.553000 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.553124 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:45 crc kubenswrapper[4728]: E0204 11:28:45.553272 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:45 crc kubenswrapper[4728]: E0204 11:28:45.553392 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:45 crc kubenswrapper[4728]: E0204 11:28:45.553569 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.553796 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:45 crc kubenswrapper[4728]: E0204 11:28:45.553909 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.577136 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dc6rd_3dbc56be-abfc-4180-870e-f4c19bd09f4b/kube-multus/0.log" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.577192 4728 generic.go:334] "Generic (PLEG): container finished" podID="3dbc56be-abfc-4180-870e-f4c19bd09f4b" containerID="cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d" exitCode=1 Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.577233 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dc6rd" event={"ID":"3dbc56be-abfc-4180-870e-f4c19bd09f4b","Type":"ContainerDied","Data":"cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d"} Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.577675 4728 scope.go:117] "RemoveContainer" containerID="cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.594731 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388
416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.609777 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.624994 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.637585 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.637623 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.637632 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.637652 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.637663 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:45Z","lastTransitionTime":"2026-02-04T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.640937 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.652395 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:44Z\\\",\\\"message\\\":\\\"2026-02-04T11:27:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033\\\\n2026-02-04T11:27:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033 to /host/opt/cni/bin/\\\\n2026-02-04T11:27:59Z [verbose] multus-daemon started\\\\n2026-02-04T11:27:59Z [verbose] Readiness Indicator file check\\\\n2026-02-04T11:28:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.673058 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:21Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 11:28:21.266411 6417 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:21.266449 6417 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:21.266462 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 11:28:21.266472 6417 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 11:28:21.266492 6417 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:21.266507 6417 handler.go:208] Removed *v1.Node event handler 7\\\\nI0204 11:28:21.266497 6417 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:21.266525 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:21.266518 6417 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:21.266555 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:21.266570 6417 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:21.266616 6417 factory.go:656] Stopping watch factory\\\\nI0204 11:28:21.266639 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:21.266682 6417 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:21.266699 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.687644 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.706775 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.724368 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.736896 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.740547 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.740622 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.740632 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.740651 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.740663 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:45Z","lastTransitionTime":"2026-02-04T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.750374 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.759278 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.770941 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.783100 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.795400 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.805515 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.815381 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:45Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.843392 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.843440 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.843454 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.843471 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.843485 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:45Z","lastTransitionTime":"2026-02-04T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.946595 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.946637 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.946647 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.946663 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:45 crc kubenswrapper[4728]: I0204 11:28:45.946673 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:45Z","lastTransitionTime":"2026-02-04T11:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.050854 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.050928 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.050967 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.051001 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.051030 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:46Z","lastTransitionTime":"2026-02-04T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.154283 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.154340 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.154349 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.154368 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.154382 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:46Z","lastTransitionTime":"2026-02-04T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.257729 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.257802 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.257816 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.257838 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.257850 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:46Z","lastTransitionTime":"2026-02-04T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.360734 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.360819 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.360837 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.361323 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.361634 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:46Z","lastTransitionTime":"2026-02-04T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.464811 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.464965 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.464993 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.465023 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.465047 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:46Z","lastTransitionTime":"2026-02-04T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.541158 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:30:10.084183497 +0000 UTC Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.570386 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.570452 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.570479 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.570504 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.570524 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:46Z","lastTransitionTime":"2026-02-04T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.582596 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dc6rd_3dbc56be-abfc-4180-870e-f4c19bd09f4b/kube-multus/0.log" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.582703 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dc6rd" event={"ID":"3dbc56be-abfc-4180-870e-f4c19bd09f4b","Type":"ContainerStarted","Data":"67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca"} Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.595629 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.611047 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.627200 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:44Z\\\",\\\"message\\\":\\\"2026-02-04T11:27:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033\\\\n2026-02-04T11:27:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033 to /host/opt/cni/bin/\\\\n2026-02-04T11:27:59Z [verbose] multus-daemon started\\\\n2026-02-04T11:27:59Z [verbose] Readiness Indicator file check\\\\n2026-02-04T11:28:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.642666 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.655829 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.667234 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.676498 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.676531 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.676540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.676554 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 
11:28:46.676562 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:46Z","lastTransitionTime":"2026-02-04T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.680611 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.697898 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.717771 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c
6107196cd3111fef63eeb275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:21Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 11:28:21.266411 6417 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:21.266449 6417 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:21.266462 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 11:28:21.266472 6417 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 11:28:21.266492 6417 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:21.266507 6417 handler.go:208] Removed *v1.Node event handler 7\\\\nI0204 11:28:21.266497 6417 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:21.266525 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:21.266518 6417 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:21.266555 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:21.266570 6417 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:21.266616 6417 factory.go:656] Stopping watch factory\\\\nI0204 11:28:21.266639 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:21.266682 6417 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:21.266699 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.732587 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.742708 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.756668 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is 
after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.769022 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.778960 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.779202 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.779301 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.779408 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.779491 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:46Z","lastTransitionTime":"2026-02-04T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.783201 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.793657 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.804359 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.815547 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:46Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.882381 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.882443 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.882459 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.882478 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.882494 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:46Z","lastTransitionTime":"2026-02-04T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.985009 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.985040 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.985048 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.985063 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:46 crc kubenswrapper[4728]: I0204 11:28:46.985073 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:46Z","lastTransitionTime":"2026-02-04T11:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.087672 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.087729 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.087741 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.087783 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.087795 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:47Z","lastTransitionTime":"2026-02-04T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.191205 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.191289 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.191313 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.191347 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.191367 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:47Z","lastTransitionTime":"2026-02-04T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.293640 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.293680 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.293688 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.293703 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.293713 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:47Z","lastTransitionTime":"2026-02-04T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.397265 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.397341 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.397352 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.397368 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.397377 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:47Z","lastTransitionTime":"2026-02-04T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.500208 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.500263 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.500278 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.500329 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.500345 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:47Z","lastTransitionTime":"2026-02-04T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.542118 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 18:56:44.410222751 +0000 UTC Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.553607 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.553668 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.553711 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.553795 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:47 crc kubenswrapper[4728]: E0204 11:28:47.553950 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:47 crc kubenswrapper[4728]: E0204 11:28:47.554145 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:47 crc kubenswrapper[4728]: E0204 11:28:47.554271 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:47 crc kubenswrapper[4728]: E0204 11:28:47.554376 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.603155 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.603202 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.603213 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.603232 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.603245 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:47Z","lastTransitionTime":"2026-02-04T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.705722 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.705793 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.705803 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.705819 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.705835 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:47Z","lastTransitionTime":"2026-02-04T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.808576 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.808614 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.808623 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.808642 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.808652 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:47Z","lastTransitionTime":"2026-02-04T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.911539 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.911570 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.911578 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.911591 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:47 crc kubenswrapper[4728]: I0204 11:28:47.911602 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:47Z","lastTransitionTime":"2026-02-04T11:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.013678 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.013727 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.013745 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.013784 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.013803 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.117005 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.117072 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.117096 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.117131 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.117154 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.221028 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.221137 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.221169 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.221213 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.221259 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.324806 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.324856 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.324868 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.324888 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.324902 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.427931 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.428036 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.428068 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.428100 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.428122 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.530972 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.531028 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.531041 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.531062 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.531077 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.542384 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 22:25:17.628563942 +0000 UTC Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.633224 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.633574 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.633587 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.633604 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.633617 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.737722 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.737841 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.737858 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.737879 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.737896 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.840935 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.840984 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.840993 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.841014 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.841026 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.921330 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.921509 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.921533 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.921558 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.921573 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: E0204 11:28:48.936278 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:48Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.940653 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.940715 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.940727 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.940762 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.940776 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: E0204 11:28:48.955065 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:48Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.960281 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.960326 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
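The "Error updating node status, will retry" entry above shows why the Ready=False condition never reaches the API server: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-02-04, so every status patch is rejected and the kubelet retries with an identical payload. A minimal diagnostic sketch, assuming it is run on the node itself while the webhook is still listening, that dials the same endpoint and prints the certificate's validity window:

// certcheck.go - a minimal sketch, not part of the original log: it dials the
// webhook endpoint named in the error above and prints the serving
// certificate's validity window. InsecureSkipVerify is deliberate, since the
// point is to inspect the certificate, not to trust it.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Fprintln(os.Stderr, "no peer certificate presented")
		os.Exit(1)
	}
	cert := certs[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	if now.After(cert.NotAfter) {
		fmt.Printf("EXPIRED %s ago\n", now.Sub(cert.NotAfter).Round(time.Hour))
	}
}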
event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.960346 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.960366 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.960378 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: E0204 11:28:48.973252 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:48Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.976605 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.976654 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
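The condition={...} payload embedded in each setters.go:603 entry is ordinary v1.NodeCondition JSON. A small sketch (the struct below mirrors only the fields present in this log, rather than importing k8s.io/api) that decodes one such blob, copied verbatim from an entry above:

// condition.go - a minimal sketch decoding the condition JSON that the
// "Node became not ready" entries embed.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s since %s: %s (%s)\n", c.Type, c.Status, c.LastTransitionTime, c.Reason, c.Message)
}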
event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.976667 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.976689 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.976702 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:48 crc kubenswrapper[4728]: E0204 11:28:48.991344 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:48Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.994144 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.994197 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.994211 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.994231 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:48 crc kubenswrapper[4728]: I0204 11:28:48.994244 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:48Z","lastTransitionTime":"2026-02-04T11:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:49 crc kubenswrapper[4728]: E0204 11:28:49.007075 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:49Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:49 crc kubenswrapper[4728]: E0204 11:28:49.007238 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.008706 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.008810 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.008833 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.008860 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.008876 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:49Z","lastTransitionTime":"2026-02-04T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.111317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.111382 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.111406 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.111427 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.111446 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:49Z","lastTransitionTime":"2026-02-04T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.214255 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.214311 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.214324 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.214343 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.214359 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:49Z","lastTransitionTime":"2026-02-04T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.317010 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.317059 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.317069 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.317087 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.317098 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:49Z","lastTransitionTime":"2026-02-04T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.420827 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.420893 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.420911 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.420940 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.420959 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:49Z","lastTransitionTime":"2026-02-04T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.523742 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.523822 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.523834 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.523852 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.523862 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:49Z","lastTransitionTime":"2026-02-04T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.543451 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:07:31.403469779 +0000 UTC Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.553005 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.553103 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.553192 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:49 crc kubenswrapper[4728]: E0204 11:28:49.553185 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.553383 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:49 crc kubenswrapper[4728]: E0204 11:28:49.553387 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:49 crc kubenswrapper[4728]: E0204 11:28:49.553495 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:49 crc kubenswrapper[4728]: E0204 11:28:49.553561 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.554251 4728 scope.go:117] "RemoveContainer" containerID="e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.626632 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.626685 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.626700 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.626721 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.626739 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:49Z","lastTransitionTime":"2026-02-04T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.730312 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.730401 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.730417 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.730439 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.730451 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:49Z","lastTransitionTime":"2026-02-04T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.832685 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.832779 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.832798 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.832822 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.832838 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:49Z","lastTransitionTime":"2026-02-04T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.935587 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.935635 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.935645 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.935662 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:49 crc kubenswrapper[4728]: I0204 11:28:49.935675 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:49Z","lastTransitionTime":"2026-02-04T11:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.038536 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.038674 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.038698 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.038734 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.038784 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:50Z","lastTransitionTime":"2026-02-04T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.141192 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.141231 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.141240 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.141254 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.141263 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:50Z","lastTransitionTime":"2026-02-04T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.243723 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.243792 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.243804 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.243824 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.243838 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:50Z","lastTransitionTime":"2026-02-04T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.346324 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.346368 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.346381 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.346399 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.346410 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:50Z","lastTransitionTime":"2026-02-04T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.448437 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.448493 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.448505 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.448525 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.448539 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:50Z","lastTransitionTime":"2026-02-04T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.544535 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:57:15.205374701 +0000 UTC Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.551187 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.551280 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.551293 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.551310 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.551321 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:50Z","lastTransitionTime":"2026-02-04T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.598239 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/2.log" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.600634 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"} Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.601609 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.615581 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.626442 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.642400 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4899c35edf792b6e3d284613bc3af3d6ebcd2f68
aa00bf8cd8c881a4f2e8b2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:21Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 11:28:21.266411 6417 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:21.266449 6417 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:21.266462 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 11:28:21.266472 6417 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 11:28:21.266492 6417 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:21.266507 6417 handler.go:208] Removed *v1.Node event handler 7\\\\nI0204 11:28:21.266497 6417 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:21.266525 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:21.266518 6417 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:21.266555 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:21.266570 6417 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:21.266616 6417 factory.go:656] Stopping watch factory\\\\nI0204 11:28:21.266639 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:21.266682 6417 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:21.266699 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 
11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.653652 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.653703 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.653715 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.653733 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.653744 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:50Z","lastTransitionTime":"2026-02-04T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.657355 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.669971 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.679173 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.689643 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.702515 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.713060 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.724287 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 
11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.735924 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.747393 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.756377 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.756433 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.756450 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.756469 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.756482 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:50Z","lastTransitionTime":"2026-02-04T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.758977 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.770517 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status 
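The recurring NodeNotReady condition above ("no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?") comes from the container runtime's network-readiness check: the CNI conf directory must contain at least one network config before NetworkReady can flip to true. Below is a simplified Go sketch of that check, assuming the usual libcni extensions (.conf, .conflist, .json); the directory path is the one in the log message.

// cnicheck.go: does the CNI conf dir contain any network config?
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d/" // path from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni accepts
			fmt.Println("config present:", e.Name())
			found++
		}
	}
	if found == 0 {
		// Until the network plugin (here, ovn-kubernetes) writes a
		// config, the runtime keeps reporting NetworkReady=false and the
		// node stays NotReady, exactly as logged above.
		fmt.Println("no CNI configuration file in", dir)
	}
}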
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:44Z\\\",\\\"message\\\":\\\"2026-02-04T11:27:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033\\\\n2026-02-04T11:27:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033 to /host/opt/cni/bin/\\\\n2026-02-04T11:27:59Z [verbose] multus-daemon started\\\\n2026-02-04T11:27:59Z [verbose] Readiness Indicator file check\\\\n2026-02-04T11:28:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.784006 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
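The kube-multus termination message quoted just above shows why that container exited with code 1: it polls for a readiness-indicator file written by the default network (here /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovn-kubernetes never wrote because of the webhook failure) until a timeout, and "timed out waiting for the condition" is the standard error string from the apimachinery wait helpers. The following is a sketch under those assumptions, not the exact multus code; the one-second interval is illustrative and the ~45s budget is inferred from the 11:27:59 start and 11:28:44 failure timestamps in the message.

// readinesswait.go: poll for the default-network indicator file the way
// the kube-multus exit message above describes.
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	indicator := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	// Interval is illustrative; the ~45s budget is inferred from the
	// timestamps in the logged termination message.
	err := wait.PollImmediate(1*time.Second, 45*time.Second, func() (bool, error) {
		if _, statErr := os.Stat(indicator); os.IsNotExist(statErr) {
			return false, nil // not written yet; keep polling
		} else if statErr != nil {
			return false, statErr // unexpected error aborts the wait
		}
		return true, nil
	})
	if err != nil {
		// wait.ErrWaitTimeout stringifies to "timed out waiting for the
		// condition", the exact phrase in the container's exit message.
		fmt.Println("readiness indicator:", err)
		os.Exit(1)
	}
	fmt.Println("default network ready:", indicator)
}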
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.795734 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
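Each failing record is the kubelet's status manager issuing a strategic-merge patch against the pod's "status" subresource; the escaped JSON quoted in the log lines is that patch body, and the request dies in admission because the webhook above cannot be reached over TLS. Below is a minimal client-go sketch of the same call shape, using the openshift-kube-controller-manager/kube-controller-manager-crc pod named in the surrounding entries; the kubeconfig path and the abridged patch body are illustrative.

// statuspatch.go: the call shape behind every "Failed to update status
// for pod" record above. Kubeconfig path and patch body are illustrative.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The escaped JSON quoted in the log records is a (much larger)
	// version of this strategic-merge patch body.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True","lastProbeTime":null}]}}`)

	_, err = cs.CoreV1().Pods("openshift-kube-controller-manager").Patch(
		context.TODO(), "kube-controller-manager-crc",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		// With the expired webhook certificate, admission rejects the
		// request and the error surfaces here, as in the log:
		//   Internal error occurred: failed calling webhook
		//   "pod.network-node-identity.openshift.io" ... x509: certificate has expired
		fmt.Println("patch failed:", err)
	}
}

Because the webhook gates pods/status writes, every component on the node hits the same wall; that is why the identical x509 error repeats for ovnkube, the scheduler, the apiserver, multus, and the rest of the pods in this log.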
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.807832 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-04T11:28:50Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.859026 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.859073 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.859082 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.859098 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:50 crc kubenswrapper[4728]: I0204 11:28:50.859106 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:50Z","lastTransitionTime":"2026-02-04T11:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
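The repeated "failed to verify certificate" errors above are Go's crypto/x509 validity-window check firing inside the kubelet's webhook client: the node clock (2026-02-04T11:28:50Z) is past the serving certificate's NotAfter (2025-08-24T17:21:41Z), so every status patch routed through pod.network-node-identity.openshift.io is rejected. A minimal Go sketch that reproduces the same window check offline; the PEM path is a placeholder, not a path taken from this log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Placeholder path: point this at a PEM copy of the webhook's serving cert.
        data, err := os.ReadFile("webhook-serving-cert.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
            fmt.Fprintln(os.Stderr, "no CERTIFICATE block in input")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Same window test crypto/x509 applies during chain verification:
        // the chain is rejected when "now" falls outside [NotBefore, NotAfter].
        now := time.Now().UTC()
        fmt.Printf("now=%s notBefore=%s notAfter=%s\n",
            now.Format(time.RFC3339),
            cert.NotBefore.UTC().Format(time.RFC3339),
            cert.NotAfter.UTC().Format(time.RFC3339))
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Println("certificate has expired or is not yet valid")
        }
    }
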
Has your network provider started?"} Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.065428 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.065506 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.065529 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.065562 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.065586 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:51Z","lastTransitionTime":"2026-02-04T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.168584 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.168639 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.168651 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.168668 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.169085 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:51Z","lastTransitionTime":"2026-02-04T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.272716 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.272821 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.272838 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.272860 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.272874 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:51Z","lastTransitionTime":"2026-02-04T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.375218 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.375268 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.375282 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.375302 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.375314 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:51Z","lastTransitionTime":"2026-02-04T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.477587 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.477640 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.477649 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.477667 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.477679 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:51Z","lastTransitionTime":"2026-02-04T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.545443 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:39:54.011899129 +0000 UTC Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.552851 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.552883 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:51 crc kubenswrapper[4728]: E0204 11:28:51.553015 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.553044 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.553031 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:51 crc kubenswrapper[4728]: E0204 11:28:51.553217 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:51 crc kubenswrapper[4728]: E0204 11:28:51.553259 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:51 crc kubenswrapper[4728]: E0204 11:28:51.553314 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.568354 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.580702 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.580780 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.580792 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.580808 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.580819 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:51Z","lastTransitionTime":"2026-02-04T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.582072 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.597993 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.605976 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/3.log" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.606722 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/2.log" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.610001 4728 generic.go:334] "Generic (PLEG): container finished" 
podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3" exitCode=1 Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.610059 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"} Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.610171 4728 scope.go:117] "RemoveContainer" containerID="e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.611059 4728 scope.go:117] "RemoveContainer" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3" Feb 04 11:28:51 crc kubenswrapper[4728]: E0204 11:28:51.611346 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.612964 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.637080 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.649062 4728 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.661036 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.673517 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:44Z\\\",\\\"message\\\":\\\"2026-02-04T11:27:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033\\\\n2026-02-04T11:27:59+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033 to /host/opt/cni/bin/\\\\n2026-02-04T11:27:59Z [verbose] multus-daemon started\\\\n2026-02-04T11:27:59Z [verbose] Readiness Indicator file check\\\\n2026-02-04T11:28:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.684220 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.684270 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.684281 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:51 crc kubenswrapper[4728]: 
I0204 11:28:51.684299 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.684309 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:51Z","lastTransitionTime":"2026-02-04T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.690845 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.703606 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.718123 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.734392 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 
2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.746766 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.764461 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:21Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 11:28:21.266411 6417 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:21.266449 6417 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:21.266462 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 11:28:21.266472 6417 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 11:28:21.266492 6417 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:21.266507 6417 handler.go:208] Removed *v1.Node event handler 7\\\\nI0204 11:28:21.266497 6417 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:21.266525 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:21.266518 6417 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:21.266555 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:21.266570 6417 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:21.266616 6417 factory.go:656] Stopping watch factory\\\\nI0204 11:28:21.266639 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:21.266682 6417 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:21.266699 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 
11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.778082 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.788327 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.788368 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.788376 4728 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.788391 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.788403 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:51Z","lastTransitionTime":"2026-02-04T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.789229 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.800419 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.813692 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.824825 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.837179 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.853501 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.867421 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.878656 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.890972 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\
\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.900079 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.900113 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.900126 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.900148 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.900161 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:51Z","lastTransitionTime":"2026-02-04T11:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.903277 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.914499 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.933242 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.958622 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:44Z\\\",\\\"message\\\":\\\"2026-02-04T11:27:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033\\\\n2026-02-04T11:27:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033 to /host/opt/cni/bin/\\\\n2026-02-04T11:27:59Z [verbose] multus-daemon started\\\\n2026-02-04T11:27:59Z [verbose] Readiness Indicator file check\\\\n2026-02-04T11:28:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.973377 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.985351 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:51 crc kubenswrapper[4728]: I0204 11:28:51.996174 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:51Z is after 2025-08-24T17:21:41Z"
Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.002765 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.003053 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.003136 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.003218 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 
11:28:52.003327 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:52Z","lastTransitionTime":"2026-02-04T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.009521 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.021407 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.039061 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4899c35edf792b6e3d284613bc3af3d6ebcd2f68
aa00bf8cd8c881a4f2e8b2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2d6c9455a23eb55e7cd28296fd79cb00613b81c6107196cd3111fef63eeb275\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:21Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0204 11:28:21.266411 6417 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0204 11:28:21.266449 6417 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0204 11:28:21.266462 6417 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0204 11:28:21.266472 6417 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0204 11:28:21.266492 6417 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0204 11:28:21.266507 6417 handler.go:208] Removed *v1.Node event handler 7\\\\nI0204 11:28:21.266497 6417 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0204 11:28:21.266525 6417 handler.go:208] Removed *v1.Node event handler 2\\\\nI0204 11:28:21.266518 6417 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0204 11:28:21.266555 6417 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0204 11:28:21.266570 6417 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0204 11:28:21.266616 6417 factory.go:656] Stopping watch factory\\\\nI0204 11:28:21.266639 6417 ovnkube.go:599] Stopped ovnkube\\\\nI0204 11:28:21.266682 6417 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0204 11:28:21.266699 6417 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0204 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:50Z\\\",\\\"message\\\":\\\"ned-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006b680b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-controller-manager-operator,},ClusterIP:10.217.5.58,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0204 11:28:50.561448 6824 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0204 11:28:50.561467 6824 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: 
unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o:
//d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.106422 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.106667 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.106772 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.106888 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.106985 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:52Z","lastTransitionTime":"2026-02-04T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.210346 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.210401 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.210417 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.210436 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.210448 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:52Z","lastTransitionTime":"2026-02-04T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.312725 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.312804 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.312815 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.312832 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.312843 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:52Z","lastTransitionTime":"2026-02-04T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.414744 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.414814 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.414826 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.414844 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.414854 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:52Z","lastTransitionTime":"2026-02-04T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.517329 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.517364 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.517372 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.517388 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.517398 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:52Z","lastTransitionTime":"2026-02-04T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.545913 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:46:32.79931107 +0000 UTC Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.614827 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/3.log" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.617891 4728 scope.go:117] "RemoveContainer" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3" Feb 04 11:28:52 crc kubenswrapper[4728]: E0204 11:28:52.618031 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.619417 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.619444 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.619455 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.619472 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.619483 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:52Z","lastTransitionTime":"2026-02-04T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.631859 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.648373 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:44Z\\\",\\\"message\\\":\\\"2026-02-04T11:27:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033\\\\n2026-02-04T11:27:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033 to /host/opt/cni/bin/\\\\n2026-02-04T11:27:59Z [verbose] multus-daemon started\\\\n2026-02-04T11:27:59Z [verbose] Readiness Indicator file check\\\\n2026-02-04T11:28:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.663372 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.675868 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.688274 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.701987 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.715273 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.721863 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.721912 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.721925 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.721945 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.721960 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:52Z","lastTransitionTime":"2026-02-04T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.735469 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:50Z\\\",\\\"message\\\":\\\"ned-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006b680b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-controller-manager-operator,},ClusterIP:10.217.5.58,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0204 11:28:50.561448 6824 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0204 11:28:50.561467 6824 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.750670 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.763649 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.775964 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.791543 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.806376 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.817872 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.824061 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.824142 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.824154 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.824171 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.824182 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:52Z","lastTransitionTime":"2026-02-04T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.829404 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.842379 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f78
14a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.856403 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:52Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.926579 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.926621 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.926632 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.926647 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:52 crc kubenswrapper[4728]: I0204 11:28:52.926658 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:52Z","lastTransitionTime":"2026-02-04T11:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
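Every "Failed to update status for pod" entry above fails for the same reason: the kubelet's status PATCH is routed through the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the node's clock of 2026-02-04T11:28:52Z. Go's TLS client rejects any peer certificate whose NotBefore/NotAfter window does not contain the current time. The sketch below reproduces just that validity-window check with the standard library; the PEM path is an assumption inferred from the webhook container's /etc/webhook-cert/ mount shown earlier in this log, not a verified location.

```go
// certwindow.go - a minimal sketch (not cluster tooling) of the x509
// validity-window check behind "certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Assumed path: the webhook container mounts "webhook-cert" at /etc/webhook-cert/.
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The same window test crypto/x509 applies during chain verification:
	// the current time must fall inside [NotBefore, NotAfter].
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: NotBefore %s\n", cert.NotBefore)
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: NotAfter %s\n", cert.NotAfter)
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter)
	}
}
```

If that diagnosis is right, running this against the mounted tls.crt would report a NotAfter of 2025-08-24T17:21:41Z, matching the cutoff quoted in every webhook failure in this log.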
Has your network provider started?"} Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.029559 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.029826 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.029907 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.029974 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.030127 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:53Z","lastTransitionTime":"2026-02-04T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.132489 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.132537 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.132549 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.132568 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.132578 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:53Z","lastTransitionTime":"2026-02-04T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.235279 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.235358 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.235371 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.235392 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.235404 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:53Z","lastTransitionTime":"2026-02-04T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.338183 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.338235 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.338245 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.338262 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.338272 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:53Z","lastTransitionTime":"2026-02-04T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.441404 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.441712 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.441836 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.441934 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.442019 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:53Z","lastTransitionTime":"2026-02-04T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.544439 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.544521 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.544544 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.544577 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.544624 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:53Z","lastTransitionTime":"2026-02-04T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
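The NotReady flapping above is the downstream symptom: the container runtime reports NetworkReady=false until a CNI configuration file exists in /etc/kubernetes/cni/net.d/, and OVN-Kubernetes only writes that file once ovnkube-controller is healthy — which it is not, since it sits in CrashLoopBackOff (the "back-off 40s" at restartCount 3 is consistent with the kubelet's usual 10s back-off doubling per restart: 10s, 20s, 40s). A rough illustration of the directory probe follows; the real readiness check lives in the CRI-O/ocicni plumbing, so treat this as a sketch of the idea, not the actual implementation.

```go
// cniready.go - a sketch of the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": the node stays NotReady until a CNI config
// file (*.conf, *.conflist, or *.json) shows up in the configured directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // the directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: %v\n", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("NetworkReady=true: found %s\n", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file found")
}
```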
Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.546569 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:36:04.650981383 +0000 UTC
Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.553042 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.553158 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.553346 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.553488 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:28:53 crc kubenswrapper[4728]: E0204 11:28:53.553508 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:28:53 crc kubenswrapper[4728]: E0204 11:28:53.553603 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:28:53 crc kubenswrapper[4728]: E0204 11:28:53.553698 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:28:53 crc kubenswrapper[4728]: E0204 11:28:53.553891 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.647435 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.647505 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.647520 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.647542 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.647555 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:53Z","lastTransitionTime":"2026-02-04T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.750216 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.750288 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.750298 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.750314 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.750326 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:53Z","lastTransitionTime":"2026-02-04T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.853367 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.853433 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.853453 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.853479 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.853499 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:53Z","lastTransitionTime":"2026-02-04T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.956128 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.956167 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.956178 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.956195 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:53 crc kubenswrapper[4728]: I0204 11:28:53.956208 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:53Z","lastTransitionTime":"2026-02-04T11:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.059098 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.059137 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.059145 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.059160 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.059170 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:54Z","lastTransitionTime":"2026-02-04T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.161792 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.161856 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.161868 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.161884 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.161895 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:54Z","lastTransitionTime":"2026-02-04T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.264685 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.264786 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.264804 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.264829 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.264849 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:54Z","lastTransitionTime":"2026-02-04T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.368043 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.368357 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.368372 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.368390 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.368401 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:54Z","lastTransitionTime":"2026-02-04T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.471107 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.471187 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.471213 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.471246 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.471268 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:54Z","lastTransitionTime":"2026-02-04T11:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:54 crc kubenswrapper[4728]: I0204 11:28:54.546996 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:31:50.286871306 +0000 UTC
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.426629 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.426892 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.426966 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:29:59.426923385 +0000 UTC m=+148.569627830 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.427012 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.427060 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.427094 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:29:59.427071588 +0000 UTC m=+148.569776103 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.427239 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.427383 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.427446 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-04 11:29:59.427432746 +0000 UTC m=+148.570137141 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.427604 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.427622 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.427637 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.427698 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-04 11:29:59.427683312 +0000 UTC m=+148.570387827 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.504075 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.504141 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.504154 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.504174 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.504467 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:55Z","lastTransitionTime":"2026-02-04T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.528879 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.529173 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.529447 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.529471 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.529587 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-04 11:29:59.529530168 +0000 UTC m=+148.672234613 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.548056 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:04:16.068061302 +0000 UTC
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.553956 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.554002 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.553956 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.554035 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.554129 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.554242 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.554348 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:55 crc kubenswrapper[4728]: E0204 11:28:55.554381 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.607107 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.607174 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.607199 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.607232 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:55 crc kubenswrapper[4728]: I0204 11:28:55.607256 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:55Z","lastTransitionTime":"2026-02-04T11:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 04 11:28:56 crc kubenswrapper[4728]: I0204 11:28:56.548390 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 10:07:32.133545543 +0000 UTC
Feb 04 11:28:56 crc kubenswrapper[4728]: I0204 11:28:56.642322 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:28:56 crc kubenswrapper[4728]: I0204 11:28:56.642419 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:28:56 crc kubenswrapper[4728]: I0204 11:28:56.642452 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:28:56 crc kubenswrapper[4728]: I0204 11:28:56.642483 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:28:56 crc kubenswrapper[4728]: I0204 11:28:56.642506 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:56Z","lastTransitionTime":"2026-02-04T11:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.468209 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.468265 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.468278 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.468296 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.468310 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:57Z","lastTransitionTime":"2026-02-04T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.549051 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:00:32.603282025 +0000 UTC Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.554007 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.554050 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.554162 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.554394 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:57 crc kubenswrapper[4728]: E0204 11:28:57.554616 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:57 crc kubenswrapper[4728]: E0204 11:28:57.554814 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:57 crc kubenswrapper[4728]: E0204 11:28:57.554964 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:57 crc kubenswrapper[4728]: E0204 11:28:57.555076 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.572566 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.574540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.574601 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.574620 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.574648 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.574668 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:57Z","lastTransitionTime":"2026-02-04T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.677913 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.677966 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.677984 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.678012 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.678036 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:57Z","lastTransitionTime":"2026-02-04T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.781001 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.781083 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.781107 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.781142 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.781164 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:57Z","lastTransitionTime":"2026-02-04T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.883957 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.884009 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.884019 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.884037 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.884047 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:57Z","lastTransitionTime":"2026-02-04T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.987341 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.987401 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.987418 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.987445 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:57 crc kubenswrapper[4728]: I0204 11:28:57.987461 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:57Z","lastTransitionTime":"2026-02-04T11:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.090719 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.090781 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.090793 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.090811 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.090821 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:58Z","lastTransitionTime":"2026-02-04T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.194052 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.194100 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.194111 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.194132 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.194146 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:58Z","lastTransitionTime":"2026-02-04T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.297076 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.297716 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.297929 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.298106 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.298314 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:58Z","lastTransitionTime":"2026-02-04T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.401325 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.401828 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.402016 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.402217 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.402370 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:58Z","lastTransitionTime":"2026-02-04T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.505258 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.505332 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.505355 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.505385 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.505406 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:58Z","lastTransitionTime":"2026-02-04T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.549877 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:52:40.962637624 +0000 UTC Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.608818 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.608883 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.608905 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.608932 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.608950 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:58Z","lastTransitionTime":"2026-02-04T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.712119 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.712178 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.712194 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.712215 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.712229 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:58Z","lastTransitionTime":"2026-02-04T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.815431 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.815494 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.815511 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.815536 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.815554 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:58Z","lastTransitionTime":"2026-02-04T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.918542 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.918995 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.919225 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.919412 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:58 crc kubenswrapper[4728]: I0204 11:28:58.919570 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:58Z","lastTransitionTime":"2026-02-04T11:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.022979 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.023097 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.023118 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.023147 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.023170 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.090269 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.090344 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.090363 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.090390 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.090407 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: E0204 11:28:59.108730 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.115048 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.115120 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.115138 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.115162 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.115181 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: E0204 11:28:59.131873 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.136688 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.136781 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.136801 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.136826 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.136843 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: E0204 11:28:59.151408 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.156993 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.157055 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.157073 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.157098 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.157115 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: E0204 11:28:59.176287 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.181307 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.181355 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.181372 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.181407 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.181424 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: E0204 11:28:59.203146 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:28:59Z is after 2025-08-24T17:21:41Z" Feb 04 11:28:59 crc kubenswrapper[4728]: E0204 11:28:59.203370 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.205706 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.205780 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.205803 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.205825 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.205841 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.308618 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.308668 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.308679 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.308699 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.308711 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.411670 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.411735 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.411772 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.411793 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.411808 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.515717 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.515820 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.515839 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.515862 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.515878 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.550327 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:40:55.395045848 +0000 UTC Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.553913 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.553925 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.554062 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.554096 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:28:59 crc kubenswrapper[4728]: E0204 11:28:59.554255 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:28:59 crc kubenswrapper[4728]: E0204 11:28:59.554333 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:28:59 crc kubenswrapper[4728]: E0204 11:28:59.554709 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:28:59 crc kubenswrapper[4728]: E0204 11:28:59.554775 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.571344 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.618434 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.618495 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.618515 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.618538 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.618555 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.720582 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.720623 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.720633 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.720647 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.720657 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.823146 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.823175 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.823183 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.823196 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.823207 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.925375 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.925447 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.925465 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.925493 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:28:59 crc kubenswrapper[4728]: I0204 11:28:59.925509 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:28:59Z","lastTransitionTime":"2026-02-04T11:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.029374 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.029431 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.029449 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.029475 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.029492 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:00Z","lastTransitionTime":"2026-02-04T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.132460 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.132520 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.132533 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.132555 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.132567 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:00Z","lastTransitionTime":"2026-02-04T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.235146 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.235192 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.235201 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.235218 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.235228 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:00Z","lastTransitionTime":"2026-02-04T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.337836 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.337876 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.337886 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.337904 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.337914 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:00Z","lastTransitionTime":"2026-02-04T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.441035 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.441074 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.441085 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.441101 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.441111 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:00Z","lastTransitionTime":"2026-02-04T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.543139 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.543187 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.543198 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.543215 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.543226 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:00Z","lastTransitionTime":"2026-02-04T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.550641 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 22:27:15.160737105 +0000 UTC Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.645157 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.645209 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.645218 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.645233 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.645245 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:00Z","lastTransitionTime":"2026-02-04T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.747793 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.747845 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.747859 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.747875 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.747886 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:00Z","lastTransitionTime":"2026-02-04T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.850458 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.850508 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.850521 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.850539 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.850550 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:00Z","lastTransitionTime":"2026-02-04T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.953158 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.953205 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.953216 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.953234 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:00 crc kubenswrapper[4728]: I0204 11:29:00.953247 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:00Z","lastTransitionTime":"2026-02-04T11:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.055508 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.055576 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.055585 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.055602 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.055612 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:01Z","lastTransitionTime":"2026-02-04T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.158085 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.158180 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.158210 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.158241 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.158264 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:01Z","lastTransitionTime":"2026-02-04T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.261545 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.261624 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.261647 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.261679 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.261700 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:01Z","lastTransitionTime":"2026-02-04T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.365006 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.365064 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.365077 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.365098 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.365113 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:01Z","lastTransitionTime":"2026-02-04T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.468980 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.469056 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.469075 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.469106 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.469128 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:01Z","lastTransitionTime":"2026-02-04T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.550741 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:23:18.265362875 +0000 UTC Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.553210 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.553341 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.553456 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.553479 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:29:01 crc kubenswrapper[4728]: E0204 11:29:01.553616 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:29:01 crc kubenswrapper[4728]: E0204 11:29:01.553985 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:29:01 crc kubenswrapper[4728]: E0204 11:29:01.554084 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:29:01 crc kubenswrapper[4728]: E0204 11:29:01.553864 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.572491 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.572551 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.572573 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.572603 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.572625 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:01Z","lastTransitionTime":"2026-02-04T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.595450 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f14e6b-b2bf-4601-8e77-8357fc2e59a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c7d9a731ed5d1c8cdc76f061d2d2d89b7a67a0f5bedb1b293c5292030940de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628d6e455bcd6e8569b9b9fba591ff066202c5cfa8da3e0751cc132a08edb221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1442856ad417a9a8d16b8c8a48aa75dc9c0023bfa3563da0cd35698cd266e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d361a6a307259cfffbb9b0db8434558eb2ed21c
48ca68cf29b69d9bbfac0d410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087c4568dee348585cdb9a2cc6b2ca4ad29678673df7c9795d6330a1d780b492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d16713485cc51cbaf2fcdfee53c58a94d76d1e87839108a14745a42efd0f46a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16713485cc51cbaf2fcdfee53c58a94d76d1e87839108a14745a42efd0f46a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f2e53d55216a6594516ef44a5e1a074366927648ce02cebd8557bd14573474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55f2e53d55216a6594516ef44a5e1a074366927648ce02cebd8557bd14573474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3bac0d562240b4b50ffe404e92451c62dbd5a554f04583953c3f9c2119091c9b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bac0d562240b4b50ffe404e92451c62dbd5a554f04583953c3f9c2119091c9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.615116 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.629033 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.643898 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.669971 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.675042 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.675105 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.675123 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.675151 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.675170 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:01Z","lastTransitionTime":"2026-02-04T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.691704 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.708790 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.721563 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.736097 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.752842 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manag
er-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.766597 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.778064 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.782106 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.782149 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.782161 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.782180 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.782193 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:01Z","lastTransitionTime":"2026-02-04T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.790354 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:44Z\\\",\\\"message\\\":\\\"2026-02-04T11:27:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033\\\\n2026-02-04T11:27:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033 to /host/opt/cni/bin/\\\\n2026-02-04T11:27:59Z [verbose] multus-daemon started\\\\n2026-02-04T11:27:59Z [verbose] Readiness Indicator file check\\\\n2026-02-04T11:28:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.807583 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.818194 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89352e74-0ab5-4c1b-8fce-0a513483ec0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94d799590335fcabe32dc02c41327fcbc320eeb2473371a6e0ece9fd6a072a65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7daa3791c1798436182e3caa17966b437685ca2bf98993f9ff1a1d497fd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7daa3791c1798436182e3caa17966b437685ca2bf98993f9ff1a1d497fd3b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.830990 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.845635 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 
2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.859536 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.884632 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.884671 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.884682 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.884699 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.884710 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:01Z","lastTransitionTime":"2026-02-04T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.901298 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:50Z\\\",\\\"message\\\":\\\"ned-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006b680b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-controller-manager-operator,},ClusterIP:10.217.5.58,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0204 11:28:50.561448 6824 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0204 11:28:50.561467 6824 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:01Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.986970 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.987021 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.987030 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.987048 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:01 crc kubenswrapper[4728]: I0204 11:29:01.987062 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:01Z","lastTransitionTime":"2026-02-04T11:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.091129 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.091223 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.091246 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.091286 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.091314 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:02Z","lastTransitionTime":"2026-02-04T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.194301 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.194352 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.194361 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.194376 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.194386 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:02Z","lastTransitionTime":"2026-02-04T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.296318 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.296363 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.296372 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.296386 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.296396 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:02Z","lastTransitionTime":"2026-02-04T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.398732 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.398821 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.398835 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.398852 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.398865 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:02Z","lastTransitionTime":"2026-02-04T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.500884 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.500933 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.500941 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.500956 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.500966 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:02Z","lastTransitionTime":"2026-02-04T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.552023 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:34:14.557157247 +0000 UTC Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.603528 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.603580 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.603592 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.603611 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.603623 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:02Z","lastTransitionTime":"2026-02-04T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.706366 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.706404 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.706416 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.706432 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.706444 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:02Z","lastTransitionTime":"2026-02-04T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.808878 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.808922 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.808934 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.808951 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.808961 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:02Z","lastTransitionTime":"2026-02-04T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.911597 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.911658 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.911673 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.911694 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:02 crc kubenswrapper[4728]: I0204 11:29:02.911708 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:02Z","lastTransitionTime":"2026-02-04T11:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.014534 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.014589 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.014601 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.014621 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.014633 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:03Z","lastTransitionTime":"2026-02-04T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.117176 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.117237 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.117254 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.117279 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.117293 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:03Z","lastTransitionTime":"2026-02-04T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.219876 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.219915 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.219924 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.219939 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.219947 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:03Z","lastTransitionTime":"2026-02-04T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.322460 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.322513 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.322526 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.322545 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.322556 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:03Z","lastTransitionTime":"2026-02-04T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.425343 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.425387 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.425396 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.425412 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.425423 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:03Z","lastTransitionTime":"2026-02-04T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.529629 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.529725 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.529784 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.529819 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.529854 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:03Z","lastTransitionTime":"2026-02-04T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.553411 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.553478 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.553586 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:39:54.729144861 +0000 UTC
Feb 04 11:29:03 crc kubenswrapper[4728]: E0204 11:29:03.553657 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.553726 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:03 crc kubenswrapper[4728]: E0204 11:29:03.553865 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:03 crc kubenswrapper[4728]: E0204 11:29:03.554011 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.554062 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:03 crc kubenswrapper[4728]: E0204 11:29:03.554249 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.632599 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.632670 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.632682 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.632701 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.632713 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:03Z","lastTransitionTime":"2026-02-04T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.736130 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.736184 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.736193 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.736210 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.736219 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:03Z","lastTransitionTime":"2026-02-04T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.839609 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.839697 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.839843 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.839887 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.839926 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:03Z","lastTransitionTime":"2026-02-04T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.943429 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.943468 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.943486 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.943504 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:03 crc kubenswrapper[4728]: I0204 11:29:03.943517 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:03Z","lastTransitionTime":"2026-02-04T11:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.046476 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.046579 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.046599 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.046628 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.046645 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:04Z","lastTransitionTime":"2026-02-04T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.149404 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.149446 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.149454 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.149470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.149479 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:04Z","lastTransitionTime":"2026-02-04T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.251097 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.251372 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.251444 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.251568 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.251664 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:04Z","lastTransitionTime":"2026-02-04T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.354355 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.354405 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.354414 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.354431 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.354442 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:04Z","lastTransitionTime":"2026-02-04T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.457403 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.457470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.457485 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.457505 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.457518 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:04Z","lastTransitionTime":"2026-02-04T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.554155 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:45:04.755414277 +0000 UTC
Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.560173 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.560222 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.560235 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.560254 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.560268 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:04Z","lastTransitionTime":"2026-02-04T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.662681 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.662771 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.662782 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.662803 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.662813 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:04Z","lastTransitionTime":"2026-02-04T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.765378 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.765432 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.765444 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.765463 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.765477 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:04Z","lastTransitionTime":"2026-02-04T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.867709 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.867744 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.867772 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.867787 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.867794 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:04Z","lastTransitionTime":"2026-02-04T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.970366 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.970416 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.970424 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.970441 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:04 crc kubenswrapper[4728]: I0204 11:29:04.970453 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:04Z","lastTransitionTime":"2026-02-04T11:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.072814 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.072865 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.072883 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.072909 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.072926 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:05Z","lastTransitionTime":"2026-02-04T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.175325 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.175379 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.175393 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.175414 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.175426 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:05Z","lastTransitionTime":"2026-02-04T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.278289 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.278326 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.278335 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.278349 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.278358 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:05Z","lastTransitionTime":"2026-02-04T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.380048 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.380127 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.380150 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.380181 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.380206 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:05Z","lastTransitionTime":"2026-02-04T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.482856 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.482993 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.483018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.483037 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.483049 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:05Z","lastTransitionTime":"2026-02-04T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.553846 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.553895 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:05 crc kubenswrapper[4728]: E0204 11:29:05.554015 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.554157 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.554242 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:15:54.940948817 +0000 UTC
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.554131 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:05 crc kubenswrapper[4728]: E0204 11:29:05.554320 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:05 crc kubenswrapper[4728]: E0204 11:29:05.554347 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:05 crc kubenswrapper[4728]: E0204 11:29:05.554400 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.586738 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.586822 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.586837 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.586857 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.586870 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:05Z","lastTransitionTime":"2026-02-04T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.688963 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.689018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.689032 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.689055 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.689072 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:05Z","lastTransitionTime":"2026-02-04T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.791366 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.791400 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.791408 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.791422 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.791431 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:05Z","lastTransitionTime":"2026-02-04T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.893950 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.894245 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.894331 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.894401 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.894468 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:05Z","lastTransitionTime":"2026-02-04T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.997984 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.998040 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.998054 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.998075 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:05 crc kubenswrapper[4728]: I0204 11:29:05.998088 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:05Z","lastTransitionTime":"2026-02-04T11:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.100315 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.100363 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.100372 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.100389 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.100398 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:06Z","lastTransitionTime":"2026-02-04T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.202861 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.202905 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.202919 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.202941 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.202955 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:06Z","lastTransitionTime":"2026-02-04T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.305607 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.305664 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.305673 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.305688 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.305697 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:06Z","lastTransitionTime":"2026-02-04T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.408156 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.408200 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.408210 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.408224 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.408234 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:06Z","lastTransitionTime":"2026-02-04T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.511118 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.511182 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.511209 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.511235 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.511257 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:06Z","lastTransitionTime":"2026-02-04T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.554349 4728 scope.go:117] "RemoveContainer" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"
Feb 04 11:29:06 crc kubenswrapper[4728]: E0204 11:29:06.554699 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018"
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.554770 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:48:59.285400992 +0000 UTC
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.614572 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.614613 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.614626 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.614642 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.614653 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:06Z","lastTransitionTime":"2026-02-04T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.717685 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.717797 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.717809 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.717831 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.717843 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:06Z","lastTransitionTime":"2026-02-04T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.821420 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.821494 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.821509 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.821534 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.821549 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:06Z","lastTransitionTime":"2026-02-04T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.924071 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.924113 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.924125 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.924144 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:06 crc kubenswrapper[4728]: I0204 11:29:06.924156 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:06Z","lastTransitionTime":"2026-02-04T11:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.027721 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.027792 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.027805 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.027822 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.027842 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:07Z","lastTransitionTime":"2026-02-04T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.131335 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.131400 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.131418 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.131443 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.131461 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:07Z","lastTransitionTime":"2026-02-04T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.234280 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.234336 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.234346 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.234363 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.234374 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:07Z","lastTransitionTime":"2026-02-04T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.337065 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.337136 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.337157 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.337181 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.337197 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:07Z","lastTransitionTime":"2026-02-04T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.439714 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.439764 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.439777 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.439792 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.439801 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:07Z","lastTransitionTime":"2026-02-04T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.542080 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.542124 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.542133 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.542150 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.542160 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:07Z","lastTransitionTime":"2026-02-04T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.553692 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.553737 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.553825 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:07 crc kubenswrapper[4728]: E0204 11:29:07.553860 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.553869 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:07 crc kubenswrapper[4728]: E0204 11:29:07.553967 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:07 crc kubenswrapper[4728]: E0204 11:29:07.554113 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:07 crc kubenswrapper[4728]: E0204 11:29:07.554199 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.555017 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:37:57.336809132 +0000 UTC
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.644377 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.644429 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.644442 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.644460 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.644472 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:07Z","lastTransitionTime":"2026-02-04T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.748169 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.748221 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.748236 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.748261 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.748277 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:07Z","lastTransitionTime":"2026-02-04T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.851170 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.851234 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.851246 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.851267 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.851279 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:07Z","lastTransitionTime":"2026-02-04T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.954096 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.954160 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.954179 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.954206 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:07 crc kubenswrapper[4728]: I0204 11:29:07.954225 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:07Z","lastTransitionTime":"2026-02-04T11:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.056342 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.056407 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.056420 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.056442 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.056456 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:08Z","lastTransitionTime":"2026-02-04T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.159422 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.159458 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.159469 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.159486 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.159498 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:08Z","lastTransitionTime":"2026-02-04T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.262812 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.262868 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.262879 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.262898 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.262908 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:08Z","lastTransitionTime":"2026-02-04T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.365334 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.365406 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.365420 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.365447 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.365467 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:08Z","lastTransitionTime":"2026-02-04T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.467639 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.467693 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.467706 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.467727 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.467739 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:08Z","lastTransitionTime":"2026-02-04T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.555965 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:20:50.157995401 +0000 UTC Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.571120 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.571471 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.571489 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.571514 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.571532 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:08Z","lastTransitionTime":"2026-02-04T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.673717 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.673800 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.673813 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.673831 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.673843 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:08Z","lastTransitionTime":"2026-02-04T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.776888 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.776949 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.776969 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.777028 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.777043 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:08Z","lastTransitionTime":"2026-02-04T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.880285 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.880361 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.880380 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.880412 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.880431 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:08Z","lastTransitionTime":"2026-02-04T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.986414 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.986506 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.986908 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.987366 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:08 crc kubenswrapper[4728]: I0204 11:29:08.987397 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:08Z","lastTransitionTime":"2026-02-04T11:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.090921 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.090981 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.090993 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.091042 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.091058 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.194896 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.194964 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.194978 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.195000 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.195014 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.297468 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.297528 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.297547 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.297568 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.297580 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.353785 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.353833 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.353843 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.353860 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.353868 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: E0204 11:29:09.368114 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.371800 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.371858 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.371871 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.371890 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.372197 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: E0204 11:29:09.385431 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.389501 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.389545 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.389556 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.389575 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.389586 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: E0204 11:29:09.401966 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.406188 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.406260 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.406276 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.406299 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.406316 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: E0204 11:29:09.420073 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.424403 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.424453 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.424464 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.424483 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.424496 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: E0204 11:29:09.436575 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-04T11:29:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f12b397-1ee0-403e-83d4-9817c484418d\\\",\\\"systemUUID\\\":\\\"66cbc6ec-a45e-4a6f-aa22-486f7addb0e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:09Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:09 crc kubenswrapper[4728]: E0204 11:29:09.436817 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.438310 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.438348 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.438365 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.438388 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.438403 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.540526 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.540591 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.540605 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.540624 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.540637 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.553079 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.553079 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.553159 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.553189 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:29:09 crc kubenswrapper[4728]: E0204 11:29:09.553378 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:29:09 crc kubenswrapper[4728]: E0204 11:29:09.553430 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:29:09 crc kubenswrapper[4728]: E0204 11:29:09.553519 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:29:09 crc kubenswrapper[4728]: E0204 11:29:09.553572 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.556165 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:44:16.197512725 +0000 UTC Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.676969 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.677037 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.677051 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.677095 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.677111 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.780207 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.780247 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.780255 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.780270 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.780279 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.883152 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.883216 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.883241 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.883271 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.883295 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.985380 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.985416 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.985425 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.985439 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:09 crc kubenswrapper[4728]: I0204 11:29:09.985449 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:09Z","lastTransitionTime":"2026-02-04T11:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.088253 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.088308 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.088338 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.088364 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.088381 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:10Z","lastTransitionTime":"2026-02-04T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.191581 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.191639 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.191656 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.191680 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.191696 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:10Z","lastTransitionTime":"2026-02-04T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.294412 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.294472 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.294490 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.294512 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.294526 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:10Z","lastTransitionTime":"2026-02-04T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.397024 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.397158 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.397184 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.397204 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.397216 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:10Z","lastTransitionTime":"2026-02-04T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.500465 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.500511 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.500521 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.500544 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.500553 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:10Z","lastTransitionTime":"2026-02-04T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.556515 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:39:46.734139474 +0000 UTC Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.603400 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.603467 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.603482 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.603508 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.603527 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:10Z","lastTransitionTime":"2026-02-04T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.707047 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.707130 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.707149 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.707175 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.707192 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:10Z","lastTransitionTime":"2026-02-04T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.810134 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.810215 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.810247 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.810279 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.810303 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:10Z","lastTransitionTime":"2026-02-04T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.913252 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.913297 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.913309 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.913329 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:10 crc kubenswrapper[4728]: I0204 11:29:10.913343 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:10Z","lastTransitionTime":"2026-02-04T11:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.016194 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.016237 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.016247 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.016263 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.016272 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:11Z","lastTransitionTime":"2026-02-04T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.118512 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.118599 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.118616 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.118633 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.118668 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:11Z","lastTransitionTime":"2026-02-04T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.221677 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.221782 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.221808 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.221828 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.221838 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:11Z","lastTransitionTime":"2026-02-04T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.325291 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.325349 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.325359 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.325376 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.325388 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:11Z","lastTransitionTime":"2026-02-04T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.428842 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.428911 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.428933 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.428964 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.428984 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:11Z","lastTransitionTime":"2026-02-04T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.531657 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.531722 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.531741 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.531810 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.531839 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:11Z","lastTransitionTime":"2026-02-04T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.553545 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.554009 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:29:11 crc kubenswrapper[4728]: E0204 11:29:11.554228 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.554335 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:29:11 crc kubenswrapper[4728]: E0204 11:29:11.554430 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.554515 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:29:11 crc kubenswrapper[4728]: E0204 11:29:11.554590 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:29:11 crc kubenswrapper[4728]: E0204 11:29:11.554014 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.556640 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:40:30.251410913 +0000 UTC Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.591007 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f14e6b-b2bf-4601-8e77-8357fc2e59a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10c7d9a731ed5d1c8cdc76f061d2d2d89b7a67a0f5bedb1b293c5292030940de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628d6e455bcd6e8569b9b9fba591ff066202c5cfa8da3e0751cc132a08edb221\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1442856ad417a9a8d16b8c8a48aa75dc9c0023bfa3563da0cd35698cd266e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d361a6a307259cfffbb9b0db8434558eb2ed21c
48ca68cf29b69d9bbfac0d410\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://087c4568dee348585cdb9a2cc6b2ca4ad29678673df7c9795d6330a1d780b492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d16713485cc51cbaf2fcdfee53c58a94d76d1e87839108a14745a42efd0f46a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16713485cc51cbaf2fcdfee53c58a94d76d1e87839108a14745a42efd0f46a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f2e53d55216a6594516ef44a5e1a074366927648ce02cebd8557bd14573474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55f2e53d55216a6594516ef44a5e1a074366927648ce02cebd8557bd14573474\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3bac0d562240b4b50ffe404e92451c62dbd5a554f04583953c3f9c2119091c9b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bac0d562240b4b50ffe404e92451c62dbd5a554f04583953c3f9c2119091c9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.615607 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82876f1aba6589b57d275cf8c27fa3c18fd041c9fe9b7c7fbc7828ad8d1f3cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://674f96584834d44803a16a020604b3bad60b397f2c5e70b239c86f0ccb1bc13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.630100 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlf2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83fdeccf-dd9f-4c93-bece-3382f3f4898f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6c2b46522de5ffd1ec6fd62ed7d9a9badd1fa56f50ca94c1cf420dbfdff6b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67gmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-tlf2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.637128 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.637193 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.637214 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.637240 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.637257 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:11Z","lastTransitionTime":"2026-02-04T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.648030 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd2519d-be03-457c-b9d6-70862115f6a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2pqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q6m9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.665435 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"file observer\\\\nW0204 11:27:51.266483 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0204 11:27:51.266799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0204 11:27:51.267862 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-309258506/tls.crt::/tmp/serving-cert-309258506/tls.key\\\\\\\"\\\\nI0204 11:27:51.612201 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0204 11:27:51.617093 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0204 11:27:51.617118 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0204 11:27:51.617144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0204 11:27:51.617151 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0204 11:27:51.640330 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0204 11:27:51.640360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640366 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0204 11:27:51.640371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0204 11:27:51.640375 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0204 11:27:51.640378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0204 11:27:51.640382 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0204 11:27:51.640534 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0204 11:27:51.643961 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.684330 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.701521 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8409df-def9-46a0-a813-6788ddf1e292\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bdb21311228dc34a48bd143205203ecccfc310e039a665c68840d2037e5080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d86n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grzvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.715445 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-hxdks" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a1278ce-9dcb-4501-bb81-c0fa0f4fbbc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e577a7dc6fec5c001ea95717179fa96d1cde31cd543522beee88fb258e015cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl8sk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hxdks\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.732151 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a52364b-5c09-4b77-95c3-7d9a7488afea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde6f9faaf63f3a2ea395efe5d2d550c28cd5ca3c0bcdcfdea8b9eef9d5d07ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://416be36377201562a463d23953b7486f3fe1a9b532a474016ca4a8ecffad97d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4j2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:28:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rm2jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 
11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.739899 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.739952 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.739968 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.739994 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.740010 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:11Z","lastTransitionTime":"2026-02-04T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.752680 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5641b69-19b1-4d41-bbd9-72bb22b60a54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97bcc313dddc8d3b460df6cc8d3e1d401dedcb5c80c7c42c97e723f654181297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fc2148b409e47bbbb07f43418f6ff88b7aeef8d603ff48c3c03d2a8b798fab5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a01f937f8465f3f2bbd9bf02c3a7a30cf4f3519a17a43e95f152cb639d3ea1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.775131 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6dd426ad6042bab6ed856a3ce9f1e3007ef419a32b25f1e6cb0fd1ed0d69bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.793199 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.811974 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dc6rd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbc56be-abfc-4180-870e-f4c19bd09f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:44Z\\\",\\\"message\\\":\\\"2026-02-04T11:27:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033\\\\n2026-02-04T11:27:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcf100bf-6408-461a-8c1f-92e795711033 to /host/opt/cni/bin/\\\\n2026-02-04T11:27:59Z [verbose] multus-daemon started\\\\n2026-02-04T11:27:59Z [verbose] Readiness Indicator file check\\\\n2026-02-04T11:28:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n64p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dc6rd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.836668 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b010d460-72d6-4943-9230-8750e91ef21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a71bd02cf28c4e240cbe09a8bfc2905bbfa661099257ef7314d470b617a078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ff0d0d195dbb4fce867c9bb4d8504b07d66f6fbd8f5a1e2c1cfbd306bed8cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37eafc4c84e8ce5ef896c2d942231545bafc9597881a66c5042b69ca7513624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7511437e4f521ed0d0c6087f7fbbe94b9847f51f0023910ac4ccacc3a88ae2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d806c776d3be16d6eb66ad000125066857cc63811c9efe253a66b0e0a27199f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cada77fff1efb19ac1f0630a8c747d203be9fe516cd00f0e71a843596f16230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8163714bae0d7a2830d5f3e5cabfc4db7be8b831e3bc9b5434fb46ac6207e4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:28:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn9nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gcj4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.842255 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.842291 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:11 crc 
kubenswrapper[4728]: I0204 11:29:11.842315 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.842344 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.842361 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:11Z","lastTransitionTime":"2026-02-04T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.853500 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89352e74-0ab5-4c1b-8fce-0a513483ec0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94d799590335fcabe32dc02c41327fcbc320eeb2473371a6e0ece9fd6a072a65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7daa3791c1798436182e3caa17966b437685ca2bf98993f9ff1a1d497fd3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7daa3791c1798436182e3caa17966b437685ca2bf98993f9ff1a1d497fd3b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\
\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.871802 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdb3d38-4351-46dd-aed8-8a801475e407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e84ba2475d182d976c3325c5c6d085da785ea2f0e585efc0bd13fb236115af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c9d55223b6c31a2f2bd7274585289dd7b7c3f81a7cc5b41a078e70159e776f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9eec5c80a99930c1cd81b792654bf3855e2927699d0f558e59b649235db1c75\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5555d6ee1af424b30fd6a4b4957274b80c2550555ac785e489600f99317c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.887356 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e900e248d070c34770a958c37e15ffe00e851f93a8031d3ded20bbbeb735ba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.900810 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.922495 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e963298-5c99-4db8-bdba-88187d4b0018\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4899c35edf792b6e3d284613bc3af3d6ebcd2f68
aa00bf8cd8c881a4f2e8b2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-04T11:28:50Z\\\",\\\"message\\\":\\\"ned-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006b680b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-controller-manager-operator,},ClusterIP:10.217.5.58,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0204 11:28:50.561448 6824 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0204 11:28:50.561467 6824 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-04T11:28:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-04T11:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-04T11:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-04T11:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tp28q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-04T11:27:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c6r5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-04T11:29:11Z is after 2025-08-24T17:21:41Z" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.944538 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.944633 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.944645 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.944662 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:11 crc kubenswrapper[4728]: I0204 11:29:11.944671 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:11Z","lastTransitionTime":"2026-02-04T11:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.051479 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.051587 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.051623 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.051651 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.051677 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:12Z","lastTransitionTime":"2026-02-04T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.154680 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.154825 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.154847 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.154874 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.154896 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:12Z","lastTransitionTime":"2026-02-04T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.257693 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.257771 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.257787 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.257809 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.257824 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:12Z","lastTransitionTime":"2026-02-04T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.359596 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.359651 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.359660 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.359680 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.359689 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:12Z","lastTransitionTime":"2026-02-04T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.462834 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.462886 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.462899 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.462917 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.462930 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:12Z","lastTransitionTime":"2026-02-04T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.557863 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:43:38.377954679 +0000 UTC Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.566499 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.566609 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.566629 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.566657 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.566678 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:12Z","lastTransitionTime":"2026-02-04T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.670253 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.670317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.670335 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.670359 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.670375 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:12Z","lastTransitionTime":"2026-02-04T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.773216 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.773275 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.773671 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.773726 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.773748 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:12Z","lastTransitionTime":"2026-02-04T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.877951 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.878023 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.878041 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.878071 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.878095 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:12Z","lastTransitionTime":"2026-02-04T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.981574 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.981665 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.981689 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.981719 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:12 crc kubenswrapper[4728]: I0204 11:29:12.981736 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:12Z","lastTransitionTime":"2026-02-04T11:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.084446 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.084523 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.084545 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.084579 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.084602 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:13Z","lastTransitionTime":"2026-02-04T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.187815 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.187888 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.187899 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.187920 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.187936 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:13Z","lastTransitionTime":"2026-02-04T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.290274 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.290322 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.290331 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.290351 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.290362 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:13Z","lastTransitionTime":"2026-02-04T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.393165 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.393216 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.393228 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.393246 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.393258 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:13Z","lastTransitionTime":"2026-02-04T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.495077 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.495119 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.495130 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.495150 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.495161 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:13Z","lastTransitionTime":"2026-02-04T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.553057 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:29:13 crc kubenswrapper[4728]: E0204 11:29:13.553200 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.553218 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.553373 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:29:13 crc kubenswrapper[4728]: E0204 11:29:13.553431 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:29:13 crc kubenswrapper[4728]: E0204 11:29:13.553677 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.553052 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:29:13 crc kubenswrapper[4728]: E0204 11:29:13.553919 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.558977 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:14:19.647929829 +0000 UTC Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.598476 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.598534 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.598545 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.598562 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.598573 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:13Z","lastTransitionTime":"2026-02-04T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.700140 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.700214 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.700224 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.700238 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.700247 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:13Z","lastTransitionTime":"2026-02-04T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.802314 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.802368 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.802382 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.802402 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.802415 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:13Z","lastTransitionTime":"2026-02-04T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.905103 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.905146 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.905155 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.905171 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:13 crc kubenswrapper[4728]: I0204 11:29:13.905187 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:13Z","lastTransitionTime":"2026-02-04T11:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.008393 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.008438 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.008447 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.008467 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.008477 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:14Z","lastTransitionTime":"2026-02-04T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.111187 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.111252 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.111274 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.111302 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.111319 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:14Z","lastTransitionTime":"2026-02-04T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.214274 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.214334 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.214352 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.214379 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.214405 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:14Z","lastTransitionTime":"2026-02-04T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.317632 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.317722 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.317746 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.317822 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.317847 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:14Z","lastTransitionTime":"2026-02-04T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.422852 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.422960 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.422987 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.423033 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.423075 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:14Z","lastTransitionTime":"2026-02-04T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.525540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.525577 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.525585 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.525601 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.525610 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:14Z","lastTransitionTime":"2026-02-04T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.559973 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:45:31.33993243 +0000 UTC Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.628150 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.628200 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.628208 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.628230 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.628262 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:14Z","lastTransitionTime":"2026-02-04T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.731803 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.731861 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.731872 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.731893 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:14 crc kubenswrapper[4728]: I0204 11:29:14.731909 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:14Z","lastTransitionTime":"2026-02-04T11:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.367375 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:29:15 crc kubenswrapper[4728]: E0204 11:29:15.367591 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:29:15 crc kubenswrapper[4728]: E0204 11:29:15.367667 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs podName:8fd2519d-be03-457c-b9d6-70862115f6a9 nodeName:}" failed. No retries permitted until 2026-02-04 11:30:19.367647966 +0000 UTC m=+168.510352351 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs") pod "network-metrics-daemon-q6m9t" (UID: "8fd2519d-be03-457c-b9d6-70862115f6a9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.453469 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.453526 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.453543 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.453562 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.453574 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:15Z","lastTransitionTime":"2026-02-04T11:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.553095 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.553142 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.553193 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.553367 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:29:15 crc kubenswrapper[4728]: E0204 11:29:15.553358 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:29:15 crc kubenswrapper[4728]: E0204 11:29:15.553493 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:29:15 crc kubenswrapper[4728]: E0204 11:29:15.553622 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:29:15 crc kubenswrapper[4728]: E0204 11:29:15.553845 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.555465 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.555494 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.555504 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.555520 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.555530 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:15Z","lastTransitionTime":"2026-02-04T11:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 04 11:29:15 crc kubenswrapper[4728]: I0204 11:29:15.560303 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:28:46.196464452 +0000 UTC
Feb 04 11:29:16 crc kubenswrapper[4728]: I0204 11:29:16.560541 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:11:11.02688035 +0000 UTC
pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.561557 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:34:28.760735032 +0000 UTC Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.613549 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.613598 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.613610 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.613630 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.613642 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:17Z","lastTransitionTime":"2026-02-04T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.718507 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.718586 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.718613 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.718642 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.718663 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:17Z","lastTransitionTime":"2026-02-04T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.821540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.821621 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.821644 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.821674 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.821699 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:17Z","lastTransitionTime":"2026-02-04T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.924667 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.924718 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.924729 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.924748 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:17 crc kubenswrapper[4728]: I0204 11:29:17.924773 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:17Z","lastTransitionTime":"2026-02-04T11:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.027624 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.027682 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.027690 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.027704 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.027713 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:18Z","lastTransitionTime":"2026-02-04T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.130201 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.130236 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.130247 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.130265 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.130278 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:18Z","lastTransitionTime":"2026-02-04T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.233420 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.233458 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.233468 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.233486 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.233497 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:18Z","lastTransitionTime":"2026-02-04T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.336103 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.336145 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.336156 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.336176 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.336188 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:18Z","lastTransitionTime":"2026-02-04T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.438115 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.438152 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.438162 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.438180 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.438194 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:18Z","lastTransitionTime":"2026-02-04T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.540267 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.540303 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.540312 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.540326 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.540335 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:18Z","lastTransitionTime":"2026-02-04T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
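[editor's note] The condition payload in each setters.go:603 line is the node's Ready condition exactly as it is serialized into the Node status; the field names below are copied from the log, the struct itself is a local stand-in for the real type in k8s.io/api/core/v1. A minimal sketch reproducing that shape:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // NodeCondition mirrors the JSON shape seen in the setters.go:603 entries.
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        now := time.Date(2026, 2, 4, 11, 29, 18, 0, time.UTC).Format(time.RFC3339)
        c := NodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  now,
            LastTransitionTime: now,
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false ...",
        }
        b, _ := json.Marshal(c)
        fmt.Println(string(b)) // same key order and shape as the logged condition
    }

Until the CNI gate clears, every node status sync re-evaluates readiness and re-records the same four events plus this condition, which is why the block repeats roughly every 100 ms.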
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.562528 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:36:11.748646077 +0000 UTC
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.643118 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.643208 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.643220 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.643237 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.643248 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:18Z","lastTransitionTime":"2026-02-04T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.746399 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.746448 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.746463 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.746484 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.746497 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:18Z","lastTransitionTime":"2026-02-04T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.848809 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.848885 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.848903 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.849121 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.849139 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:18Z","lastTransitionTime":"2026-02-04T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.951369 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.951435 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.951446 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.951461 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:18 crc kubenswrapper[4728]: I0204 11:29:18.951475 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:18Z","lastTransitionTime":"2026-02-04T11:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.054614 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.054667 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.054678 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.054698 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.054709 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:19Z","lastTransitionTime":"2026-02-04T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.157883 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.157960 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.157990 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.158019 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.158039 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:19Z","lastTransitionTime":"2026-02-04T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.260560 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.260602 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.260611 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.260627 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.260637 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:19Z","lastTransitionTime":"2026-02-04T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.363394 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.363489 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.363500 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.363519 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.363530 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:19Z","lastTransitionTime":"2026-02-04T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.465852 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.465895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.465905 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.465923 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.465934 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:19Z","lastTransitionTime":"2026-02-04T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.552859 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.552875 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:19 crc kubenswrapper[4728]: E0204 11:29:19.553034 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.552886 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.552886 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:19 crc kubenswrapper[4728]: E0204 11:29:19.553166 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:19 crc kubenswrapper[4728]: E0204 11:29:19.553291 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:19 crc kubenswrapper[4728]: E0204 11:29:19.553374 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.562906 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:02:55.389649737 +0000 UTC
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.568395 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.568436 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.568445 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.568457 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.568466 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:19Z","lastTransitionTime":"2026-02-04T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.581717 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.581774 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.581787 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.581801 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.581813 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-04T11:29:19Z","lastTransitionTime":"2026-02-04T11:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.634612 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"]
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.635300 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.639414 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.639994 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.640512 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.642339 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.662875 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rm2jw" podStartSLOduration=82.662854663 podStartE2EDuration="1m22.662854663s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:19.662810602 +0000 UTC m=+108.805514987" watchObservedRunningTime="2026-02-04 11:29:19.662854663 +0000 UTC m=+108.805559068"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.695731 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.695711713 podStartE2EDuration="1m27.695711713s" podCreationTimestamp="2026-02-04 11:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:19.682144337 +0000 UTC m=+108.824848712" watchObservedRunningTime="2026-02-04 11:29:19.695711713 +0000 UTC m=+108.838416098"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.714339 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d03267fa-bef6-4e47-8d83-c40bbe4a0630-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.714384 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d03267fa-bef6-4e47-8d83-c40bbe4a0630-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.714427 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d03267fa-bef6-4e47-8d83-c40bbe4a0630-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.714450 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d03267fa-bef6-4e47-8d83-c40bbe4a0630-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.714466 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d03267fa-bef6-4e47-8d83-c40bbe4a0630-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.716355 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podStartSLOduration=82.716330738 podStartE2EDuration="1m22.716330738s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:19.715416817 +0000 UTC m=+108.858121232" watchObservedRunningTime="2026-02-04 11:29:19.716330738 +0000 UTC m=+108.859035153"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.726666 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hxdks" podStartSLOduration=82.72663682 podStartE2EDuration="1m22.72663682s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:19.726298393 +0000 UTC m=+108.869002798" watchObservedRunningTime="2026-02-04 11:29:19.72663682 +0000 UTC m=+108.869341235"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.746556 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gcj4t" podStartSLOduration=82.746537309 podStartE2EDuration="1m22.746537309s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:19.746071218 +0000 UTC m=+108.888775623" watchObservedRunningTime="2026-02-04 11:29:19.746537309 +0000 UTC m=+108.889241714"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.787853 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.7878331 podStartE2EDuration="1m28.7878331s" podCreationTimestamp="2026-02-04 11:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:19.776211408 +0000 UTC m=+108.918915793" watchObservedRunningTime="2026-02-04 11:29:19.7878331 +0000 UTC m=+108.930537485"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.810912 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dc6rd" podStartSLOduration=82.810893279 podStartE2EDuration="1m22.810893279s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:19.810596783 +0000 UTC m=+108.953301168" watchObservedRunningTime="2026-02-04 11:29:19.810893279 +0000 UTC m=+108.953597664"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.815624 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d03267fa-bef6-4e47-8d83-c40bbe4a0630-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.815683 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d03267fa-bef6-4e47-8d83-c40bbe4a0630-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.815703 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d03267fa-bef6-4e47-8d83-c40bbe4a0630-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.815744 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d03267fa-bef6-4e47-8d83-c40bbe4a0630-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.815786 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d03267fa-bef6-4e47-8d83-c40bbe4a0630-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.815830 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d03267fa-bef6-4e47-8d83-c40bbe4a0630-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.815833 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d03267fa-bef6-4e47-8d83-c40bbe4a0630-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.816651 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d03267fa-bef6-4e47-8d83-c40bbe4a0630-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.824965 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d03267fa-bef6-4e47-8d83-c40bbe4a0630-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.832124 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d03267fa-bef6-4e47-8d83-c40bbe4a0630-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lpmnn\" (UID: \"d03267fa-bef6-4e47-8d83-c40bbe4a0630\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.852434 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.852362564 podStartE2EDuration="20.852362564s" podCreationTimestamp="2026-02-04 11:28:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:19.852212171 +0000 UTC m=+108.994916556" watchObservedRunningTime="2026-02-04 11:29:19.852362564 +0000 UTC m=+108.995066949"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.876849 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.876820725 podStartE2EDuration="1m1.876820725s" podCreationTimestamp="2026-02-04 11:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:19.863068396 +0000 UTC m=+109.005772791" watchObservedRunningTime="2026-02-04 11:29:19.876820725 +0000 UTC m=+109.019525130"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.916229 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=22.916210754 podStartE2EDuration="22.916210754s" podCreationTimestamp="2026-02-04 11:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:19.915987188 +0000 UTC m=+109.058691583" watchObservedRunningTime="2026-02-04 11:29:19.916210754 +0000 UTC m=+109.058915139"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.941077 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tlf2v" podStartSLOduration=82.941054453 podStartE2EDuration="1m22.941054453s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:19.93955438 +0000 UTC m=+109.082258765" watchObservedRunningTime="2026-02-04 11:29:19.941054453 +0000 UTC m=+109.083758838"
Feb 04 11:29:19 crc kubenswrapper[4728]: I0204 11:29:19.962801 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn"
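[editor's note] For the cluster-version-operator pod, each of its five volumes walks the same three phases in order: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), MountVolume.SetUp succeeded (operation_generator.go:637). The host-path and configmap volumes finish within a millisecond, while the secret and projected volumes (serving-cert at .824965, kube-api-access at .832124) take ~10 ms longer because they materialize API data onto disk. A condensed sketch of that per-volume ordering (hypothetical helper; the real logic lives in the kubelet volume manager):

    package main

    import "log"

    // reconcile walks each desired volume through the same phases the log
    // shows for cluster-version-operator-5c965bbfc6-lpmnn.
    func reconcile(pod string, volumes []string) {
        for _, v := range volumes {
            log.Printf("VerifyControllerAttachedVolume started for volume %q pod=%s", v, pod)
        }
        for _, v := range volumes {
            log.Printf("MountVolume started for volume %q pod=%s", v, pod)
            log.Printf("MountVolume.SetUp succeeded for volume %q pod=%s", v, pod)
        }
    }

    func main() {
        reconcile("openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn",
            []string{"etc-cvo-updatepayloads", "etc-ssl-certs", "kube-api-access", "serving-cert", "service-ca"})
    }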
Feb 04 11:29:20 crc kubenswrapper[4728]: I0204 11:29:20.554319 4728 scope.go:117] "RemoveContainer" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"
Feb 04 11:29:20 crc kubenswrapper[4728]: E0204 11:29:20.554666 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c6r5d_openshift-ovn-kubernetes(0e963298-5c99-4db8-bdba-88187d4b0018)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018"
Feb 04 11:29:20 crc kubenswrapper[4728]: I0204 11:29:20.563809 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:18:56.172639535 +0000 UTC
Feb 04 11:29:20 crc kubenswrapper[4728]: I0204 11:29:20.563866 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 04 11:29:20 crc kubenswrapper[4728]: I0204 11:29:20.571601 4728 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 04 11:29:20 crc kubenswrapper[4728]: I0204 11:29:20.717329 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn" event={"ID":"d03267fa-bef6-4e47-8d83-c40bbe4a0630","Type":"ContainerStarted","Data":"da3648a3404249d780ed0b004deaf491960d3047be801ff3387f8cbe32b83fd5"}
Feb 04 11:29:20 crc kubenswrapper[4728]: I0204 11:29:20.717413 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn" event={"ID":"d03267fa-bef6-4e47-8d83-c40bbe4a0630","Type":"ContainerStarted","Data":"864979fb4ce4413004ee6607587bb66ddde25709c5a69ec55226f8b2cacfe6f0"}
Feb 04 11:29:20 crc kubenswrapper[4728]: I0204 11:29:20.733309 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lpmnn" podStartSLOduration=83.733291533 podStartE2EDuration="1m23.733291533s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:20.730156303 +0000 UTC m=+109.872860688" watchObservedRunningTime="2026-02-04 11:29:20.733291533 +0000 UTC m=+109.875995918"
Feb 04 11:29:21 crc kubenswrapper[4728]: I0204 11:29:21.553259 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:21 crc kubenswrapper[4728]: I0204 11:29:21.553286 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:21 crc kubenswrapper[4728]: I0204 11:29:21.553268 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:21 crc kubenswrapper[4728]: I0204 11:29:21.553362 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
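[editor's note] The four certificate_manager.go:356 lines report a different "rotation deadline" for the same serving certificate on each pass because the client-go certificate manager re-draws a randomized deadline inside the certificate's validity window; once the drawn deadline lies in the past (all four above are well before Feb 04), rotation begins, which is what "Rotating certificates" and the CertificateSigningRequest watch that follows mark. A sketch of that deadline draw (the 70-90% window is an assumption for illustration, not read from this log, as is the 30-day lifetime):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline picks a random point inside the certificate's
    // lifetime after which rotation should begin (fractions assumed).
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        frac := 0.7 + 0.2*rand.Float64() // somewhere in [70%, 90%) of the lifetime
        return notBefore.Add(time.Duration(frac * float64(total)))
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
        notBefore := notAfter.Add(-30 * 24 * time.Hour)           // assumed lifetime
        deadline := nextRotationDeadline(notBefore, notAfter)
        fmt.Println("rotate after:", deadline)
    }

Re-drawing on every check is why the logged deadline jumps around (2025-12-11, 2026-01-07, 2026-01-12) while the expiration stays fixed.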
Feb 04 11:29:21 crc kubenswrapper[4728]: E0204 11:29:21.554264 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:21 crc kubenswrapper[4728]: E0204 11:29:21.554493 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:21 crc kubenswrapper[4728]: E0204 11:29:21.554840 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:21 crc kubenswrapper[4728]: E0204 11:29:21.555183 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:23 crc kubenswrapper[4728]: I0204 11:29:23.553709 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:23 crc kubenswrapper[4728]: I0204 11:29:23.553742 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:23 crc kubenswrapper[4728]: I0204 11:29:23.553789 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:23 crc kubenswrapper[4728]: I0204 11:29:23.553742 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:23 crc kubenswrapper[4728]: E0204 11:29:23.554053 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:29:23 crc kubenswrapper[4728]: E0204 11:29:23.554153 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:29:23 crc kubenswrapper[4728]: E0204 11:29:23.553938 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:29:23 crc kubenswrapper[4728]: E0204 11:29:23.554238 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:29:25 crc kubenswrapper[4728]: I0204 11:29:25.553487 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:29:25 crc kubenswrapper[4728]: I0204 11:29:25.553507 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:29:25 crc kubenswrapper[4728]: I0204 11:29:25.553827 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:29:25 crc kubenswrapper[4728]: E0204 11:29:25.553722 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:29:25 crc kubenswrapper[4728]: I0204 11:29:25.553529 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:29:25 crc kubenswrapper[4728]: E0204 11:29:25.553939 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:29:25 crc kubenswrapper[4728]: E0204 11:29:25.554093 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:29:25 crc kubenswrapper[4728]: E0204 11:29:25.554235 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:29:27 crc kubenswrapper[4728]: I0204 11:29:27.553398 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:29:27 crc kubenswrapper[4728]: E0204 11:29:27.554176 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:29:27 crc kubenswrapper[4728]: I0204 11:29:27.553485 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:29:27 crc kubenswrapper[4728]: E0204 11:29:27.554474 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:29:27 crc kubenswrapper[4728]: I0204 11:29:27.553586 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:29:27 crc kubenswrapper[4728]: E0204 11:29:27.554780 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:29:27 crc kubenswrapper[4728]: I0204 11:29:27.553449 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:29:27 crc kubenswrapper[4728]: E0204 11:29:27.555061 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:29:29 crc kubenswrapper[4728]: I0204 11:29:29.552986 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:29:29 crc kubenswrapper[4728]: E0204 11:29:29.553113 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:29:29 crc kubenswrapper[4728]: I0204 11:29:29.553121 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:29:29 crc kubenswrapper[4728]: I0204 11:29:29.553268 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:29:29 crc kubenswrapper[4728]: E0204 11:29:29.553376 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:29:29 crc kubenswrapper[4728]: E0204 11:29:29.553528 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:29:29 crc kubenswrapper[4728]: I0204 11:29:29.553873 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:29:29 crc kubenswrapper[4728]: E0204 11:29:29.553981 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:29:31 crc kubenswrapper[4728]: E0204 11:29:31.531341 4728 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 04 11:29:31 crc kubenswrapper[4728]: I0204 11:29:31.553940 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:29:31 crc kubenswrapper[4728]: I0204 11:29:31.554008 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 04 11:29:31 crc kubenswrapper[4728]: I0204 11:29:31.554087 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:29:31 crc kubenswrapper[4728]: I0204 11:29:31.557198 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 04 11:29:31 crc kubenswrapper[4728]: E0204 11:29:31.557190 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 04 11:29:31 crc kubenswrapper[4728]: E0204 11:29:31.557342 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 04 11:29:31 crc kubenswrapper[4728]: E0204 11:29:31.557407 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 04 11:29:31 crc kubenswrapper[4728]: E0204 11:29:31.557525 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9" Feb 04 11:29:31 crc kubenswrapper[4728]: E0204 11:29:31.647782 4728 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 04 11:29:31 crc kubenswrapper[4728]: I0204 11:29:31.753875 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dc6rd_3dbc56be-abfc-4180-870e-f4c19bd09f4b/kube-multus/1.log"
Feb 04 11:29:31 crc kubenswrapper[4728]: I0204 11:29:31.754618 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dc6rd_3dbc56be-abfc-4180-870e-f4c19bd09f4b/kube-multus/0.log"
Feb 04 11:29:31 crc kubenswrapper[4728]: I0204 11:29:31.754663 4728 generic.go:334] "Generic (PLEG): container finished" podID="3dbc56be-abfc-4180-870e-f4c19bd09f4b" containerID="67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca" exitCode=1
Feb 04 11:29:31 crc kubenswrapper[4728]: I0204 11:29:31.754692 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dc6rd" event={"ID":"3dbc56be-abfc-4180-870e-f4c19bd09f4b","Type":"ContainerDied","Data":"67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca"}
Feb 04 11:29:31 crc kubenswrapper[4728]: I0204 11:29:31.754723 4728 scope.go:117] "RemoveContainer" containerID="cb8289e5e35e2c45448d4de16af43eb3ad5aac96deb3238d60333e398e6c457d"
Feb 04 11:29:31 crc kubenswrapper[4728]: I0204 11:29:31.755200 4728 scope.go:117] "RemoveContainer" containerID="67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca"
Feb 04 11:29:31 crc kubenswrapper[4728]: E0204 11:29:31.755443 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dc6rd_openshift-multus(3dbc56be-abfc-4180-870e-f4c19bd09f4b)\"" pod="openshift-multus/multus-dc6rd" podUID="3dbc56be-abfc-4180-870e-f4c19bd09f4b"
Feb 04 11:29:32 crc kubenswrapper[4728]: I0204 11:29:32.554049 4728 scope.go:117] "RemoveContainer" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"
Feb 04 11:29:32 crc kubenswrapper[4728]: I0204 11:29:32.758704 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dc6rd_3dbc56be-abfc-4180-870e-f4c19bd09f4b/kube-multus/1.log"
Feb 04 11:29:32 crc kubenswrapper[4728]: I0204 11:29:32.760798 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/3.log"
Feb 04 11:29:32 crc kubenswrapper[4728]: I0204 11:29:32.763148 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerStarted","Data":"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"}
Feb 04 11:29:32 crc kubenswrapper[4728]: I0204 11:29:32.763989 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:29:32 crc kubenswrapper[4728]: I0204 11:29:32.792001 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podStartSLOduration=95.791983631 podStartE2EDuration="1m35.791983631s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:29:32.791455879 +0000 UTC m=+121.934160314" watchObservedRunningTime="2026-02-04 11:29:32.791983631 +0000 UTC m=+121.934688016"
Feb 04 11:29:33 crc kubenswrapper[4728]: I0204 11:29:33.439708 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q6m9t"]
Feb 04 11:29:33 crc kubenswrapper[4728]: I0204 11:29:33.439845 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:33 crc kubenswrapper[4728]: E0204 11:29:33.439966 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:33 crc kubenswrapper[4728]: I0204 11:29:33.553332 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:33 crc kubenswrapper[4728]: I0204 11:29:33.553419 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:33 crc kubenswrapper[4728]: E0204 11:29:33.553542 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:33 crc kubenswrapper[4728]: E0204 11:29:33.553703 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:33 crc kubenswrapper[4728]: I0204 11:29:33.553362 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:33 crc kubenswrapper[4728]: E0204 11:29:33.553884 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:35 crc kubenswrapper[4728]: I0204 11:29:35.553342 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:35 crc kubenswrapper[4728]: I0204 11:29:35.553402 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:35 crc kubenswrapper[4728]: E0204 11:29:35.553534 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:35 crc kubenswrapper[4728]: I0204 11:29:35.553554 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:35 crc kubenswrapper[4728]: I0204 11:29:35.553577 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:35 crc kubenswrapper[4728]: E0204 11:29:35.553737 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:35 crc kubenswrapper[4728]: E0204 11:29:35.553855 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:35 crc kubenswrapper[4728]: E0204 11:29:35.553964 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:36 crc kubenswrapper[4728]: E0204 11:29:36.649839 4728 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 04 11:29:37 crc kubenswrapper[4728]: I0204 11:29:37.553894 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:37 crc kubenswrapper[4728]: I0204 11:29:37.553927 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:37 crc kubenswrapper[4728]: E0204 11:29:37.554120 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:37 crc kubenswrapper[4728]: I0204 11:29:37.554189 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:37 crc kubenswrapper[4728]: I0204 11:29:37.553945 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:37 crc kubenswrapper[4728]: E0204 11:29:37.554287 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:37 crc kubenswrapper[4728]: E0204 11:29:37.554386 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:37 crc kubenswrapper[4728]: E0204 11:29:37.554715 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:37 crc kubenswrapper[4728]: I0204 11:29:37.759931 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:29:39 crc kubenswrapper[4728]: I0204 11:29:39.553576 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:39 crc kubenswrapper[4728]: I0204 11:29:39.553878 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:39 crc kubenswrapper[4728]: E0204 11:29:39.554128 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:39 crc kubenswrapper[4728]: I0204 11:29:39.553925 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:39 crc kubenswrapper[4728]: I0204 11:29:39.553886 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:39 crc kubenswrapper[4728]: E0204 11:29:39.554366 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:39 crc kubenswrapper[4728]: E0204 11:29:39.554474 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:39 crc kubenswrapper[4728]: E0204 11:29:39.554650 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:41 crc kubenswrapper[4728]: I0204 11:29:41.552633 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:41 crc kubenswrapper[4728]: I0204 11:29:41.552672 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:41 crc kubenswrapper[4728]: I0204 11:29:41.552667 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:41 crc kubenswrapper[4728]: E0204 11:29:41.554479 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:41 crc kubenswrapper[4728]: I0204 11:29:41.554628 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:41 crc kubenswrapper[4728]: E0204 11:29:41.554676 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:41 crc kubenswrapper[4728]: E0204 11:29:41.554845 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:41 crc kubenswrapper[4728]: E0204 11:29:41.554920 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:41 crc kubenswrapper[4728]: E0204 11:29:41.650462 4728 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 04 11:29:43 crc kubenswrapper[4728]: I0204 11:29:43.553562 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:43 crc kubenswrapper[4728]: I0204 11:29:43.553636 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:43 crc kubenswrapper[4728]: E0204 11:29:43.553855 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:43 crc kubenswrapper[4728]: I0204 11:29:43.553901 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:43 crc kubenswrapper[4728]: I0204 11:29:43.553875 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:43 crc kubenswrapper[4728]: E0204 11:29:43.554022 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:43 crc kubenswrapper[4728]: E0204 11:29:43.554178 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:43 crc kubenswrapper[4728]: E0204 11:29:43.554353 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:45 crc kubenswrapper[4728]: I0204 11:29:45.553697 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:45 crc kubenswrapper[4728]: I0204 11:29:45.553838 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:45 crc kubenswrapper[4728]: E0204 11:29:45.553891 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:45 crc kubenswrapper[4728]: I0204 11:29:45.553910 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:45 crc kubenswrapper[4728]: I0204 11:29:45.553949 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:45 crc kubenswrapper[4728]: E0204 11:29:45.554102 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:45 crc kubenswrapper[4728]: E0204 11:29:45.554143 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:45 crc kubenswrapper[4728]: I0204 11:29:45.554492 4728 scope.go:117] "RemoveContainer" containerID="67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca"
Feb 04 11:29:45 crc kubenswrapper[4728]: E0204 11:29:45.554545 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:45 crc kubenswrapper[4728]: I0204 11:29:45.805976 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dc6rd_3dbc56be-abfc-4180-870e-f4c19bd09f4b/kube-multus/1.log"
Feb 04 11:29:45 crc kubenswrapper[4728]: I0204 11:29:45.806061 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dc6rd" event={"ID":"3dbc56be-abfc-4180-870e-f4c19bd09f4b","Type":"ContainerStarted","Data":"c5303ece67988b48c9f7078f4f5f783e2dfa7759b80454d2a50b80d956debf57"}
Feb 04 11:29:46 crc kubenswrapper[4728]: E0204 11:29:46.652227 4728 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 04 11:29:47 crc kubenswrapper[4728]: I0204 11:29:47.553683 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:47 crc kubenswrapper[4728]: E0204 11:29:47.553918 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:47 crc kubenswrapper[4728]: I0204 11:29:47.553872 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:47 crc kubenswrapper[4728]: E0204 11:29:47.554195 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:47 crc kubenswrapper[4728]: I0204 11:29:47.554328 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:47 crc kubenswrapper[4728]: E0204 11:29:47.554404 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:47 crc kubenswrapper[4728]: I0204 11:29:47.554960 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:47 crc kubenswrapper[4728]: E0204 11:29:47.555242 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:49 crc kubenswrapper[4728]: I0204 11:29:49.553278 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:49 crc kubenswrapper[4728]: E0204 11:29:49.553852 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:49 crc kubenswrapper[4728]: I0204 11:29:49.553977 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:49 crc kubenswrapper[4728]: E0204 11:29:49.554113 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:49 crc kubenswrapper[4728]: I0204 11:29:49.554161 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:49 crc kubenswrapper[4728]: I0204 11:29:49.554186 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:49 crc kubenswrapper[4728]: E0204 11:29:49.554232 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:49 crc kubenswrapper[4728]: E0204 11:29:49.554274 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:51 crc kubenswrapper[4728]: I0204 11:29:51.553545 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:51 crc kubenswrapper[4728]: I0204 11:29:51.554479 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:51 crc kubenswrapper[4728]: E0204 11:29:51.554827 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 04 11:29:51 crc kubenswrapper[4728]: I0204 11:29:51.555095 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:51 crc kubenswrapper[4728]: I0204 11:29:51.555195 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:51 crc kubenswrapper[4728]: E0204 11:29:51.555570 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 04 11:29:51 crc kubenswrapper[4728]: E0204 11:29:51.555854 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q6m9t" podUID="8fd2519d-be03-457c-b9d6-70862115f6a9"
Feb 04 11:29:51 crc kubenswrapper[4728]: E0204 11:29:51.556029 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 04 11:29:53 crc kubenswrapper[4728]: I0204 11:29:53.553478 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:53 crc kubenswrapper[4728]: I0204 11:29:53.553551 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:53 crc kubenswrapper[4728]: I0204 11:29:53.553496 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t"
Feb 04 11:29:53 crc kubenswrapper[4728]: I0204 11:29:53.554266 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:53 crc kubenswrapper[4728]: I0204 11:29:53.556100 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 04 11:29:53 crc kubenswrapper[4728]: I0204 11:29:53.556292 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 04 11:29:53 crc kubenswrapper[4728]: I0204 11:29:53.556532 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 04 11:29:53 crc kubenswrapper[4728]: I0204 11:29:53.556540 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 04 11:29:53 crc kubenswrapper[4728]: I0204 11:29:53.556675 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 04 11:29:53 crc kubenswrapper[4728]: I0204 11:29:53.556693 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.441023 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:29:59 crc kubenswrapper[4728]: E0204 11:29:59.441266 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:32:01.441222419 +0000 UTC m=+270.583926804 (durationBeforeRetry 2m2s).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.441880 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.442001 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.442076 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.443346 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.452940 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.457944 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.543780 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.549482 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.572880 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.578926 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.597897 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 04 11:29:59 crc kubenswrapper[4728]: W0204 11:29:59.782422 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-2a5a7ed4a2d9792656990abfc1ee4a9cd01ff1d19cbb1b426ef0a80eb868e95d WatchSource:0}: Error finding container 2a5a7ed4a2d9792656990abfc1ee4a9cd01ff1d19cbb1b426ef0a80eb868e95d: Status 404 returned error can't find the container with id 2a5a7ed4a2d9792656990abfc1ee4a9cd01ff1d19cbb1b426ef0a80eb868e95d
Feb 04 11:29:59 crc kubenswrapper[4728]: I0204 11:29:59.850074 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2a5a7ed4a2d9792656990abfc1ee4a9cd01ff1d19cbb1b426ef0a80eb868e95d"}
Feb 04 11:30:00 crc kubenswrapper[4728]: W0204 11:30:00.009246 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-c530e6c8f201d9fd98569651ae6c6231dc5490177c503c909517a3218fe28b0d WatchSource:0}: Error finding container c530e6c8f201d9fd98569651ae6c6231dc5490177c503c909517a3218fe28b0d: Status 404 returned error can't find the container with id c530e6c8f201d9fd98569651ae6c6231dc5490177c503c909517a3218fe28b0d
Feb 04 11:30:00 crc kubenswrapper[4728]: W0204 11:30:00.014932 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-3d8e2d48a084a5c13d1d22e4267f87f27df5ce7fdd52ee76847c6c5e5065d16c WatchSource:0}: Error finding container 3d8e2d48a084a5c13d1d22e4267f87f27df5ce7fdd52ee76847c6c5e5065d16c: Status 404 returned error can't find the container with id 3d8e2d48a084a5c13d1d22e4267f87f27df5ce7fdd52ee76847c6c5e5065d16c
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.227328 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.268460 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r576m"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.268957 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.271935 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.272880 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6dpmf"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.273296 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.273849 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.274275 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.276862 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.276878 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.281873 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tlnkw"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.283273 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.293358 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6hr78"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.293954 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6hr78"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.294159 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.294644 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.294797 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.294827 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.294827 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.295147 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.296346 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.296500 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.296621 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.296675 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.296742 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.296874 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.297016 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.297046 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.297123 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.300422 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.300638 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.302265 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.303018 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.303071 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.303718 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.303887 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.303913 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.304020 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.304053 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-csv4c"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.304100 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.304170 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.304480 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.304919 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.305103 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.305273 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-csv4c"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.305883 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.305993 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cmjx5"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.306410 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.306939 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.307028 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.307134 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.307194 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.307241 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.307605 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.307907 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-c4ckr"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.308192 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c4ckr"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.309640 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-l4qn4"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.310183 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l4qn4"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.315662 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.316092 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-skj7q"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.316463 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.316803 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.316871 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.317334 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.317355 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.317466 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.317490 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.317632 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.317731 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.318015 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.319793 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57d49"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.320119 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.320662 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.320934 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.321955 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.322173 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.322357 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.322993 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.323301 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9"]
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.324602 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.325161 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.333237 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.333973 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.334304 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.334540 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.334904 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.335111 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.335261 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.335367 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.335421 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.335532 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.335566 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.348179 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.349872 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.350652 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.355058 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.355140 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.355178 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.355281 4728 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-ingress/router-default-5444994796-zm7m8"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.355294 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.355377 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.355393 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.355468 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.355525 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.355643 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.355879 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.356024 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.356072 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.356565 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.356596 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0352f14b-41bc-4c68-961b-51b6a4cc7a53-service-ca-bundle\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.356731 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0352f14b-41bc-4c68-961b-51b6a4cc7a53-serving-cert\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.356789 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0352f14b-41bc-4c68-961b-51b6a4cc7a53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.356991 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 04 11:30:00 crc 
kubenswrapper[4728]: I0204 11:30:00.357107 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrw7\" (UniqueName: \"kubernetes.io/projected/0352f14b-41bc-4c68-961b-51b6a4cc7a53-kube-api-access-pnrw7\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.357240 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.357496 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4k2zf"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.357796 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0352f14b-41bc-4c68-961b-51b6a4cc7a53-config\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.357990 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.357996 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.358128 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.358196 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.358286 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.358612 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.358614 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.359570 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.360239 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.361023 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.361689 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.365550 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.366371 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.366581 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.366728 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.366910 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.367111 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.367329 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.367625 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.367771 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.367818 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.367916 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.367978 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.368022 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.368086 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.368131 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.368193 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.368231 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.368341 4728 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.368444 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.368454 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.367770 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.370475 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.370500 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.376858 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.376889 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.377636 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.378023 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.380019 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.388214 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.388478 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.390285 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.392455 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.395978 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.401904 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.404916 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.405705 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s6rlf"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.407372 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.439198 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.449383 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.493416 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.493695 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.493924 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.494292 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.494650 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bnrcg"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.495011 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.495376 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.495794 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.496411 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.496632 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.496673 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.496808 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.496958 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497121 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497237 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f4f6aa38-43e0-4d04-a3a9-12b046d30937-available-featuregates\") pod \"openshift-config-operator-7777fb866f-skj7q\" (UID: \"f4f6aa38-43e0-4d04-a3a9-12b046d30937\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497280 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/57ce51dd-e252-4911-aee9-d4755db74869-images\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497308 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ccba369f-e378-4f2d-b733-f658edbd6c99-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497338 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497341 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8fdf4-3a93-4957-ad97-a1a376d821cd-serving-cert\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497609 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e8cbf19-0e6b-43ce-996c-11b1776e6eae-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rg87q\" (UID: \"7e8cbf19-0e6b-43ce-996c-11b1776e6eae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497636 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89739118-42f8-49bd-a5bf-f5f04e612dab-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nd75l\" (UID: \"89739118-42f8-49bd-a5bf-f5f04e612dab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497663 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5l7q\" (UniqueName: \"kubernetes.io/projected/33b2bdb9-5bcb-4977-8722-3d2fa6f8e291-kube-api-access-b5l7q\") pod \"cluster-samples-operator-665b6dd947-27mqj\" (UID: \"33b2bdb9-5bcb-4977-8722-3d2fa6f8e291\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497688 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-service-ca\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497716 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/031aab71-b01b-4173-a760-dc26e36374ae-auth-proxy-config\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497738 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-audit\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497782 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a16ecbae-a304-444d-b36c-c3e82a1332a1-serving-cert\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497845 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a16ecbae-a304-444d-b36c-c3e82a1332a1-encryption-config\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497874 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497911 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrw7\" (UniqueName: \"kubernetes.io/projected/0352f14b-41bc-4c68-961b-51b6a4cc7a53-kube-api-access-pnrw7\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497941 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzfnt\" (UniqueName: \"kubernetes.io/projected/1f802986-f97c-4813-9aec-d48d43eeedae-kube-api-access-xzfnt\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.497977 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9s5f\" (UniqueName: \"kubernetes.io/projected/a16ecbae-a304-444d-b36c-c3e82a1332a1-kube-api-access-g9s5f\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498002 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhc4b\" (UniqueName: \"kubernetes.io/projected/89739118-42f8-49bd-a5bf-f5f04e612dab-kube-api-access-qhc4b\") pod \"openshift-apiserver-operator-796bbdcf4f-nd75l\" (UID: \"89739118-42f8-49bd-a5bf-f5f04e612dab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498028 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498063 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-client-ca\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: 
\"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498087 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498110 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/588eb6e9-6d28-438d-b881-ab944960aa79-trusted-ca\") pod \"console-operator-58897d9998-csv4c\" (UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498128 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-image-import-ca\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498146 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a16ecbae-a304-444d-b36c-c3e82a1332a1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498164 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-config\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498181 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kzz5\" (UniqueName: \"kubernetes.io/projected/15f8fdf4-3a93-4957-ad97-a1a376d821cd-kube-api-access-8kzz5\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498201 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498267 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zbp\" (UniqueName: \"kubernetes.io/projected/3269cf72-ed95-40a4-84d6-74e53ea1c850-kube-api-access-22zbp\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfnww\" (UID: 
\"3269cf72-ed95-40a4-84d6-74e53ea1c850\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498303 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0352f14b-41bc-4c68-961b-51b6a4cc7a53-config\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498330 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498351 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77hxv\" (UniqueName: \"kubernetes.io/projected/87039d42-443e-40f7-abe1-a6462556cc32-kube-api-access-77hxv\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498380 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-etcd-serving-ca\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498404 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4e9bf47-202b-4206-8758-a446e86d7a6b-encryption-config\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498427 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-oauth-config\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498451 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/033a175a-69ae-431f-8803-b2f5db11ee91-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qrbcq\" (UID: \"033a175a-69ae-431f-8803-b2f5db11ee91\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498518 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a16ecbae-a304-444d-b36c-c3e82a1332a1-audit-policies\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: 
\"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498544 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a16ecbae-a304-444d-b36c-c3e82a1332a1-audit-dir\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498572 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-audit-policies\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498606 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498634 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-oauth-serving-cert\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498660 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031aab71-b01b-4173-a760-dc26e36374ae-config\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498692 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbw5z\" (UniqueName: \"kubernetes.io/projected/56aef793-e703-49e3-b6c5-b07e9610b661-kube-api-access-mbw5z\") pod \"service-ca-operator-777779d784-pp8c9\" (UID: \"56aef793-e703-49e3-b6c5-b07e9610b661\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498715 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4e9bf47-202b-4206-8758-a446e86d7a6b-etcd-client\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498739 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw56n\" (UniqueName: \"kubernetes.io/projected/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-kube-api-access-nw56n\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" 
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498790 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pkwr\" (UniqueName: \"kubernetes.io/projected/20be2be6-dc74-4404-b883-1ad4af94512b-kube-api-access-7pkwr\") pod \"downloads-7954f5f757-l4qn4\" (UID: \"20be2be6-dc74-4404-b883-1ad4af94512b\") " pod="openshift-console/downloads-7954f5f757-l4qn4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498816 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv5xq\" (UniqueName: \"kubernetes.io/projected/ccba369f-e378-4f2d-b733-f658edbd6c99-kube-api-access-dv5xq\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.498983 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f802986-f97c-4813-9aec-d48d43eeedae-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499013 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499038 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57ce51dd-e252-4911-aee9-d4755db74869-proxy-tls\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499101 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ccba369f-e378-4f2d-b733-f658edbd6c99-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499244 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aef793-e703-49e3-b6c5-b07e9610b661-config\") pod \"service-ca-operator-777779d784-pp8c9\" (UID: \"56aef793-e703-49e3-b6c5-b07e9610b661\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499275 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3269cf72-ed95-40a4-84d6-74e53ea1c850-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfnww\" (UID: \"3269cf72-ed95-40a4-84d6-74e53ea1c850\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499347 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f802986-f97c-4813-9aec-d48d43eeedae-config\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499379 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0352f14b-41bc-4c68-961b-51b6a4cc7a53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499400 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0352f14b-41bc-4c68-961b-51b6a4cc7a53-serving-cert\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499471 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aef793-e703-49e3-b6c5-b07e9610b661-serving-cert\") pod \"service-ca-operator-777779d784-pp8c9\" (UID: \"56aef793-e703-49e3-b6c5-b07e9610b661\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499551 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0352f14b-41bc-4c68-961b-51b6a4cc7a53-config\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499709 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm654\" (UniqueName: \"kubernetes.io/projected/031aab71-b01b-4173-a760-dc26e36374ae-kube-api-access-jm654\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499816 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-client-ca\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499909 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87039d42-443e-40f7-abe1-a6462556cc32-audit-dir\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.499996 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwfgz\" (UniqueName: \"kubernetes.io/projected/57ce51dd-e252-4911-aee9-d4755db74869-kube-api-access-zwfgz\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500085 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j8wm\" (UniqueName: \"kubernetes.io/projected/033a175a-69ae-431f-8803-b2f5db11ee91-kube-api-access-2j8wm\") pod \"openshift-controller-manager-operator-756b6f6bc6-qrbcq\" (UID: \"033a175a-69ae-431f-8803-b2f5db11ee91\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500148 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500251 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0352f14b-41bc-4c68-961b-51b6a4cc7a53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500263 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500359 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4e9bf47-202b-4206-8758-a446e86d7a6b-serving-cert\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500429 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500429 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500538 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qflqx\" (UniqueName: \"kubernetes.io/projected/f4f6aa38-43e0-4d04-a3a9-12b046d30937-kube-api-access-qflqx\") pod \"openshift-config-operator-7777fb866f-skj7q\" (UID: \"f4f6aa38-43e0-4d04-a3a9-12b046d30937\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500574 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/a4e9bf47-202b-4206-8758-a446e86d7a6b-audit-dir\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500591 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f6aa38-43e0-4d04-a3a9-12b046d30937-serving-cert\") pod \"openshift-config-operator-7777fb866f-skj7q\" (UID: \"f4f6aa38-43e0-4d04-a3a9-12b046d30937\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500612 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/031aab71-b01b-4173-a760-dc26e36374ae-machine-approver-tls\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500894 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/588eb6e9-6d28-438d-b881-ab944960aa79-serving-cert\") pod \"console-operator-58897d9998-csv4c\" (UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500944 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/33b2bdb9-5bcb-4977-8722-3d2fa6f8e291-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-27mqj\" (UID: \"33b2bdb9-5bcb-4977-8722-3d2fa6f8e291\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.500992 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-config\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501016 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501047 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0352f14b-41bc-4c68-961b-51b6a4cc7a53-service-ca-bundle\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501075 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-config\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501099 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8cbf19-0e6b-43ce-996c-11b1776e6eae-config\") pod \"kube-apiserver-operator-766d6c64bb-rg87q\" (UID: \"7e8cbf19-0e6b-43ce-996c-11b1776e6eae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501123 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a16ecbae-a304-444d-b36c-c3e82a1332a1-etcd-client\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501143 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501164 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e8cbf19-0e6b-43ce-996c-11b1776e6eae-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rg87q\" (UID: \"7e8cbf19-0e6b-43ce-996c-11b1776e6eae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501181 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501199 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a16ecbae-a304-444d-b36c-c3e82a1332a1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501218 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501240 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/a4e9bf47-202b-4206-8758-a446e86d7a6b-node-pullsecrets\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501258 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3269cf72-ed95-40a4-84d6-74e53ea1c850-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfnww\" (UID: \"3269cf72-ed95-40a4-84d6-74e53ea1c850\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501278 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-config\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501294 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-trusted-ca-bundle\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501328 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb06213-4bce-43d5-b16f-0bc09dc118fe-serving-cert\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501343 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501360 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccba369f-e378-4f2d-b733-f658edbd6c99-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501378 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89739118-42f8-49bd-a5bf-f5f04e612dab-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nd75l\" (UID: \"89739118-42f8-49bd-a5bf-f5f04e612dab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501391 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-serving-cert\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501409 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtsh2\" (UniqueName: \"kubernetes.io/projected/bdb06213-4bce-43d5-b16f-0bc09dc118fe-kube-api-access-dtsh2\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501423 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/588eb6e9-6d28-438d-b881-ab944960aa79-config\") pod \"console-operator-58897d9998-csv4c\" (UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501439 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpqlp\" (UniqueName: \"kubernetes.io/projected/588eb6e9-6d28-438d-b881-ab944960aa79-kube-api-access-cpqlp\") pod \"console-operator-58897d9998-csv4c\" (UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501453 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1f802986-f97c-4813-9aec-d48d43eeedae-images\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501468 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57ce51dd-e252-4911-aee9-d4755db74869-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501484 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s8kb\" (UniqueName: \"kubernetes.io/projected/a4e9bf47-202b-4206-8758-a446e86d7a6b-kube-api-access-6s8kb\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501501 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/033a175a-69ae-431f-8803-b2f5db11ee91-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qrbcq\" (UID: \"033a175a-69ae-431f-8803-b2f5db11ee91\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.501607 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/0352f14b-41bc-4c68-961b-51b6a4cc7a53-service-ca-bundle\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.502145 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qdvrf"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.503328 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.510117 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0352f14b-41bc-4c68-961b-51b6a4cc7a53-serving-cert\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.511620 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.515252 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.515772 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6dpmf"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.517509 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r576m"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.518611 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.520057 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6hr78"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.521720 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x6j2r"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.523091 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.534688 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.537657 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.537705 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.539520 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cmjx5"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.542913 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2v2s5"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.551476 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.551853 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.553272 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-skj7q"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.553355 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-csv4c"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.553374 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tlnkw"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.553388 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.553402 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.553415 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.553819 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57d49"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.554979 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.556116 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l4qn4"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.557828 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.558491 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.559860 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c4ckr"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.561093 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.562251 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.563405 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.564472 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bnrcg"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.565852 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.567441 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2v2s5"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.568306 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.569286 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.570352 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.570526 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.571551 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s6rlf"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.572641 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x6j2r"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.573778 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4k2zf"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.575014 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.575940 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-njv76"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.577034 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-zrwsp"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.577232 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-njv76" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.577506 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zrwsp" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.578391 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.579510 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qdvrf"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.580728 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-njv76"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.581720 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.589839 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5695c"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.591191 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5695c"] Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.591294 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5695c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.592861 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602187 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a16ecbae-a304-444d-b36c-c3e82a1332a1-audit-policies\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602235 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a16ecbae-a304-444d-b36c-c3e82a1332a1-audit-dir\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602261 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-audit-policies\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602280 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-oauth-serving-cert\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602302 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602325 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031aab71-b01b-4173-a760-dc26e36374ae-config\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602348 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbw5z\" (UniqueName: \"kubernetes.io/projected/56aef793-e703-49e3-b6c5-b07e9610b661-kube-api-access-mbw5z\") pod \"service-ca-operator-777779d784-pp8c9\" (UID: \"56aef793-e703-49e3-b6c5-b07e9610b661\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602372 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4e9bf47-202b-4206-8758-a446e86d7a6b-etcd-client\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602390 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw56n\" (UniqueName: \"kubernetes.io/projected/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-kube-api-access-nw56n\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602410 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pkwr\" (UniqueName: \"kubernetes.io/projected/20be2be6-dc74-4404-b883-1ad4af94512b-kube-api-access-7pkwr\") pod \"downloads-7954f5f757-l4qn4\" (UID: \"20be2be6-dc74-4404-b883-1ad4af94512b\") " pod="openshift-console/downloads-7954f5f757-l4qn4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602463 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv5xq\" (UniqueName: \"kubernetes.io/projected/ccba369f-e378-4f2d-b733-f658edbd6c99-kube-api-access-dv5xq\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602487 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f802986-f97c-4813-9aec-d48d43eeedae-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602509 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: 
\"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602530 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57ce51dd-e252-4911-aee9-d4755db74869-proxy-tls\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602551 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ccba369f-e378-4f2d-b733-f658edbd6c99-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602572 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aef793-e703-49e3-b6c5-b07e9610b661-config\") pod \"service-ca-operator-777779d784-pp8c9\" (UID: \"56aef793-e703-49e3-b6c5-b07e9610b661\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602594 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3269cf72-ed95-40a4-84d6-74e53ea1c850-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfnww\" (UID: \"3269cf72-ed95-40a4-84d6-74e53ea1c850\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602617 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f802986-f97c-4813-9aec-d48d43eeedae-config\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602640 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aef793-e703-49e3-b6c5-b07e9610b661-serving-cert\") pod \"service-ca-operator-777779d784-pp8c9\" (UID: \"56aef793-e703-49e3-b6c5-b07e9610b661\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602665 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j8wm\" (UniqueName: \"kubernetes.io/projected/033a175a-69ae-431f-8803-b2f5db11ee91-kube-api-access-2j8wm\") pod \"openshift-controller-manager-operator-756b6f6bc6-qrbcq\" (UID: \"033a175a-69ae-431f-8803-b2f5db11ee91\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602688 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm654\" (UniqueName: \"kubernetes.io/projected/031aab71-b01b-4173-a760-dc26e36374ae-kube-api-access-jm654\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602711 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-client-ca\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602737 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87039d42-443e-40f7-abe1-a6462556cc32-audit-dir\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602778 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwfgz\" (UniqueName: \"kubernetes.io/projected/57ce51dd-e252-4911-aee9-d4755db74869-kube-api-access-zwfgz\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602803 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4e9bf47-202b-4206-8758-a446e86d7a6b-serving-cert\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602829 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602853 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qflqx\" (UniqueName: \"kubernetes.io/projected/f4f6aa38-43e0-4d04-a3a9-12b046d30937-kube-api-access-qflqx\") pod \"openshift-config-operator-7777fb866f-skj7q\" (UID: \"f4f6aa38-43e0-4d04-a3a9-12b046d30937\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602894 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4e9bf47-202b-4206-8758-a446e86d7a6b-audit-dir\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602916 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f6aa38-43e0-4d04-a3a9-12b046d30937-serving-cert\") pod \"openshift-config-operator-7777fb866f-skj7q\" (UID: \"f4f6aa38-43e0-4d04-a3a9-12b046d30937\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602945 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/33b2bdb9-5bcb-4977-8722-3d2fa6f8e291-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-27mqj\" (UID: \"33b2bdb9-5bcb-4977-8722-3d2fa6f8e291\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602970 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/031aab71-b01b-4173-a760-dc26e36374ae-machine-approver-tls\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.602995 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/588eb6e9-6d28-438d-b881-ab944960aa79-serving-cert\") pod \"console-operator-58897d9998-csv4c\" (UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603020 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-config\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603048 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603076 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603102 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-config\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603125 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8cbf19-0e6b-43ce-996c-11b1776e6eae-config\") pod \"kube-apiserver-operator-766d6c64bb-rg87q\" (UID: \"7e8cbf19-0e6b-43ce-996c-11b1776e6eae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603146 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/a16ecbae-a304-444d-b36c-c3e82a1332a1-etcd-client\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603170 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603193 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e8cbf19-0e6b-43ce-996c-11b1776e6eae-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rg87q\" (UID: \"7e8cbf19-0e6b-43ce-996c-11b1776e6eae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603218 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603241 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a16ecbae-a304-444d-b36c-c3e82a1332a1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603265 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-config\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603288 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-trusted-ca-bundle\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603312 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4e9bf47-202b-4206-8758-a446e86d7a6b-node-pullsecrets\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603336 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3269cf72-ed95-40a4-84d6-74e53ea1c850-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfnww\" (UID: \"3269cf72-ed95-40a4-84d6-74e53ea1c850\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" Feb 
04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603359 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603382 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccba369f-e378-4f2d-b733-f658edbd6c99-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603417 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb06213-4bce-43d5-b16f-0bc09dc118fe-serving-cert\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.603446 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89739118-42f8-49bd-a5bf-f5f04e612dab-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nd75l\" (UID: \"89739118-42f8-49bd-a5bf-f5f04e612dab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604085 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-serving-cert\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604121 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57ce51dd-e252-4911-aee9-d4755db74869-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604145 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtsh2\" (UniqueName: \"kubernetes.io/projected/bdb06213-4bce-43d5-b16f-0bc09dc118fe-kube-api-access-dtsh2\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604169 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/588eb6e9-6d28-438d-b881-ab944960aa79-config\") pod \"console-operator-58897d9998-csv4c\" (UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604193 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cpqlp\" (UniqueName: \"kubernetes.io/projected/588eb6e9-6d28-438d-b881-ab944960aa79-kube-api-access-cpqlp\") pod \"console-operator-58897d9998-csv4c\" (UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604217 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1f802986-f97c-4813-9aec-d48d43eeedae-images\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604252 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s8kb\" (UniqueName: \"kubernetes.io/projected/a4e9bf47-202b-4206-8758-a446e86d7a6b-kube-api-access-6s8kb\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604276 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/033a175a-69ae-431f-8803-b2f5db11ee91-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qrbcq\" (UID: \"033a175a-69ae-431f-8803-b2f5db11ee91\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604301 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f4f6aa38-43e0-4d04-a3a9-12b046d30937-available-featuregates\") pod \"openshift-config-operator-7777fb866f-skj7q\" (UID: \"f4f6aa38-43e0-4d04-a3a9-12b046d30937\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604323 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/57ce51dd-e252-4911-aee9-d4755db74869-images\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604344 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ccba369f-e378-4f2d-b733-f658edbd6c99-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604368 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e8cbf19-0e6b-43ce-996c-11b1776e6eae-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rg87q\" (UID: \"7e8cbf19-0e6b-43ce-996c-11b1776e6eae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604392 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/89739118-42f8-49bd-a5bf-f5f04e612dab-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nd75l\" (UID: \"89739118-42f8-49bd-a5bf-f5f04e612dab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604418 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8fdf4-3a93-4957-ad97-a1a376d821cd-serving-cert\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604440 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5l7q\" (UniqueName: \"kubernetes.io/projected/33b2bdb9-5bcb-4977-8722-3d2fa6f8e291-kube-api-access-b5l7q\") pod \"cluster-samples-operator-665b6dd947-27mqj\" (UID: \"33b2bdb9-5bcb-4977-8722-3d2fa6f8e291\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604465 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-service-ca\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604492 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604518 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/031aab71-b01b-4173-a760-dc26e36374ae-auth-proxy-config\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604539 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-audit\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604648 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a16ecbae-a304-444d-b36c-c3e82a1332a1-serving-cert\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604676 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a16ecbae-a304-444d-b36c-c3e82a1332a1-encryption-config\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604701 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604736 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzfnt\" (UniqueName: \"kubernetes.io/projected/1f802986-f97c-4813-9aec-d48d43eeedae-kube-api-access-xzfnt\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604803 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9s5f\" (UniqueName: \"kubernetes.io/projected/a16ecbae-a304-444d-b36c-c3e82a1332a1-kube-api-access-g9s5f\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604829 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhc4b\" (UniqueName: \"kubernetes.io/projected/89739118-42f8-49bd-a5bf-f5f04e612dab-kube-api-access-qhc4b\") pod \"openshift-apiserver-operator-796bbdcf4f-nd75l\" (UID: \"89739118-42f8-49bd-a5bf-f5f04e612dab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604852 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-image-import-ca\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604872 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a16ecbae-a304-444d-b36c-c3e82a1332a1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604898 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-client-ca\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604920 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604942 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/588eb6e9-6d28-438d-b881-ab944960aa79-trusted-ca\") pod \"console-operator-58897d9998-csv4c\" (UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.604966 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-config\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.605005 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kzz5\" (UniqueName: \"kubernetes.io/projected/15f8fdf4-3a93-4957-ad97-a1a376d821cd-kube-api-access-8kzz5\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.605031 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.605056 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zbp\" (UniqueName: \"kubernetes.io/projected/3269cf72-ed95-40a4-84d6-74e53ea1c850-kube-api-access-22zbp\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfnww\" (UID: \"3269cf72-ed95-40a4-84d6-74e53ea1c850\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.605086 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.605109 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77hxv\" (UniqueName: \"kubernetes.io/projected/87039d42-443e-40f7-abe1-a6462556cc32-kube-api-access-77hxv\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.605136 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/033a175a-69ae-431f-8803-b2f5db11ee91-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qrbcq\" (UID: \"033a175a-69ae-431f-8803-b2f5db11ee91\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.605160 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-etcd-serving-ca\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.605182 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4e9bf47-202b-4206-8758-a446e86d7a6b-encryption-config\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.605204 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-oauth-config\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.605563 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.606192 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031aab71-b01b-4173-a760-dc26e36374ae-config\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.606377 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3269cf72-ed95-40a4-84d6-74e53ea1c850-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfnww\" (UID: \"3269cf72-ed95-40a4-84d6-74e53ea1c850\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.606402 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4e9bf47-202b-4206-8758-a446e86d7a6b-audit-dir\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.607016 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-client-ca\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.607079 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87039d42-443e-40f7-abe1-a6462556cc32-audit-dir\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc 
kubenswrapper[4728]: I0204 11:30:00.607098 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a16ecbae-a304-444d-b36c-c3e82a1332a1-audit-policies\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.607183 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a16ecbae-a304-444d-b36c-c3e82a1332a1-audit-dir\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.607232 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f802986-f97c-4813-9aec-d48d43eeedae-config\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.608424 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-oauth-serving-cert\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.608730 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4e9bf47-202b-4206-8758-a446e86d7a6b-serving-cert\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.608986 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/031aab71-b01b-4173-a760-dc26e36374ae-machine-approver-tls\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.609769 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.610019 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.610342 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-config\") pod \"apiserver-76f77b778f-6hr78\" (UID: 
\"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.610900 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-audit-policies\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.611287 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.611318 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4e9bf47-202b-4206-8758-a446e86d7a6b-etcd-client\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.611377 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a16ecbae-a304-444d-b36c-c3e82a1332a1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.611391 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a16ecbae-a304-444d-b36c-c3e82a1332a1-etcd-client\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.611741 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-config\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.611870 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-service-ca\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.611879 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57ce51dd-e252-4911-aee9-d4755db74869-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.611904 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/588eb6e9-6d28-438d-b881-ab944960aa79-config\") pod \"console-operator-58897d9998-csv4c\" 
(UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.612016 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.612022 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8cbf19-0e6b-43ce-996c-11b1776e6eae-config\") pod \"kube-apiserver-operator-766d6c64bb-rg87q\" (UID: \"7e8cbf19-0e6b-43ce-996c-11b1776e6eae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.612056 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.612086 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-client-ca\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.612274 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-config\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.612766 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1f802986-f97c-4813-9aec-d48d43eeedae-images\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.613128 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a16ecbae-a304-444d-b36c-c3e82a1332a1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.613532 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f4f6aa38-43e0-4d04-a3a9-12b046d30937-available-featuregates\") pod \"openshift-config-operator-7777fb866f-skj7q\" (UID: \"f4f6aa38-43e0-4d04-a3a9-12b046d30937\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.613084 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4e9bf47-202b-4206-8758-a446e86d7a6b-node-pullsecrets\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" 
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.614104 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.614533 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/588eb6e9-6d28-438d-b881-ab944960aa79-trusted-ca\") pod \"console-operator-58897d9998-csv4c\" (UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.614983 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.615004 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-trusted-ca-bundle\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.615292 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ccba369f-e378-4f2d-b733-f658edbd6c99-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.615509 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/57ce51dd-e252-4911-aee9-d4755db74869-images\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.615584 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8fdf4-3a93-4957-ad97-a1a376d821cd-serving-cert\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.615614 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-config\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.615738 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-image-import-ca\") pod 
\"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.615772 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/033a175a-69ae-431f-8803-b2f5db11ee91-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qrbcq\" (UID: \"033a175a-69ae-431f-8803-b2f5db11ee91\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.615897 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.616111 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-oauth-config\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.616156 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89739118-42f8-49bd-a5bf-f5f04e612dab-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nd75l\" (UID: \"89739118-42f8-49bd-a5bf-f5f04e612dab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.616437 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f6aa38-43e0-4d04-a3a9-12b046d30937-serving-cert\") pod \"openshift-config-operator-7777fb866f-skj7q\" (UID: \"f4f6aa38-43e0-4d04-a3a9-12b046d30937\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.616691 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.616693 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/031aab71-b01b-4173-a760-dc26e36374ae-auth-proxy-config\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.616795 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-serving-cert\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 
11:30:00.617288 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/033a175a-69ae-431f-8803-b2f5db11ee91-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qrbcq\" (UID: \"033a175a-69ae-431f-8803-b2f5db11ee91\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.617301 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-audit\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.617414 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3269cf72-ed95-40a4-84d6-74e53ea1c850-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfnww\" (UID: \"3269cf72-ed95-40a4-84d6-74e53ea1c850\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.617587 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4e9bf47-202b-4206-8758-a446e86d7a6b-etcd-serving-ca\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.617415 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.617717 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.618437 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e8cbf19-0e6b-43ce-996c-11b1776e6eae-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rg87q\" (UID: \"7e8cbf19-0e6b-43ce-996c-11b1776e6eae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.618533 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/33b2bdb9-5bcb-4977-8722-3d2fa6f8e291-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-27mqj\" (UID: \"33b2bdb9-5bcb-4977-8722-3d2fa6f8e291\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.618800 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/a4e9bf47-202b-4206-8758-a446e86d7a6b-encryption-config\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.619521 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.620200 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89739118-42f8-49bd-a5bf-f5f04e612dab-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nd75l\" (UID: \"89739118-42f8-49bd-a5bf-f5f04e612dab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.620742 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb06213-4bce-43d5-b16f-0bc09dc118fe-serving-cert\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.620862 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/588eb6e9-6d28-438d-b881-ab944960aa79-serving-cert\") pod \"console-operator-58897d9998-csv4c\" (UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.620882 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.621127 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccba369f-e378-4f2d-b733-f658edbd6c99-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.621132 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f802986-f97c-4813-9aec-d48d43eeedae-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.621515 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a16ecbae-a304-444d-b36c-c3e82a1332a1-serving-cert\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.621627 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a16ecbae-a304-444d-b36c-c3e82a1332a1-encryption-config\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.632286 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.639846 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/57ce51dd-e252-4911-aee9-d4755db74869-proxy-tls\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.651158 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.671292 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.691533 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.711195 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.718540 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aef793-e703-49e3-b6c5-b07e9610b661-serving-cert\") pod \"service-ca-operator-777779d784-pp8c9\" (UID: \"56aef793-e703-49e3-b6c5-b07e9610b661\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.733092 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.751696 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.771517 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.777407 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aef793-e703-49e3-b6c5-b07e9610b661-config\") pod \"service-ca-operator-777779d784-pp8c9\" (UID: \"56aef793-e703-49e3-b6c5-b07e9610b661\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.792708 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.810906 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" 
Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.832126 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.851692 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.857116 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"153b364eb702f2bfc8cdf13e609c86f64235d2672a42f0b4851b6e6dbd11731a"} Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.857199 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3d8e2d48a084a5c13d1d22e4267f87f27df5ce7fdd52ee76847c6c5e5065d16c"} Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.859260 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"63e900ac3af474ef57a23b793e82f3ddcef5156188ecc4bfdbafe8f47e82bf5b"} Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.861550 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"96d4698082709d7931be9553988baaf3d71d158532bb47cb31238c5bc9a4a0c3"} Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.861613 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c530e6c8f201d9fd98569651ae6c6231dc5490177c503c909517a3218fe28b0d"} Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.861819 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.871744 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.891898 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.911523 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.931887 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.971923 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 04 11:30:00 crc kubenswrapper[4728]: I0204 11:30:00.991092 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.013877 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.031376 
4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.053324 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.071595 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.092270 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.111397 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.131591 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.151501 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.171879 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.191128 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.218289 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.253263 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.272194 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.293185 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.311578 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.331998 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.351864 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.372245 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.391423 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.412027 4728 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.432454 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.452294 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.471973 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.493706 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.510089 4728 request.go:700] Waited for 1.013059599s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0 Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.511815 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.532065 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.551403 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.571065 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.592117 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.611587 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.631391 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.650658 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.670816 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.691470 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.711172 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.731691 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 
11:30:01.751099 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.771084 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.791203 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.810950 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.833455 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.833828 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c"] Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.834912 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.843321 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c"] Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.871313 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrw7\" (UniqueName: \"kubernetes.io/projected/0352f14b-41bc-4c68-961b-51b6a4cc7a53-kube-api-access-pnrw7\") pod \"authentication-operator-69f744f599-6dpmf\" (UID: \"0352f14b-41bc-4c68-961b-51b6a4cc7a53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.877032 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.891482 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.912042 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.931539 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.951104 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.971805 4728 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 04 11:30:01 crc kubenswrapper[4728]: I0204 11:30:01.992691 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.018598 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 
11:30:02.031040 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.051711 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.071619 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.091869 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.111974 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.117411 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.132146 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.152228 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.171911 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.192490 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.212510 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.231160 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.251872 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.270795 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.291121 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.300478 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6dpmf"] Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.326705 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ccba369f-e378-4f2d-b733-f658edbd6c99-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.345712 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j8wm\" (UniqueName: 
\"kubernetes.io/projected/033a175a-69ae-431f-8803-b2f5db11ee91-kube-api-access-2j8wm\") pod \"openshift-controller-manager-operator-756b6f6bc6-qrbcq\" (UID: \"033a175a-69ae-431f-8803-b2f5db11ee91\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.367404 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm654\" (UniqueName: \"kubernetes.io/projected/031aab71-b01b-4173-a760-dc26e36374ae-kube-api-access-jm654\") pod \"machine-approver-56656f9798-dbmq2\" (UID: \"031aab71-b01b-4173-a760-dc26e36374ae\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.387175 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbw5z\" (UniqueName: \"kubernetes.io/projected/56aef793-e703-49e3-b6c5-b07e9610b661-kube-api-access-mbw5z\") pod \"service-ca-operator-777779d784-pp8c9\" (UID: \"56aef793-e703-49e3-b6c5-b07e9610b661\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.405209 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qflqx\" (UniqueName: \"kubernetes.io/projected/f4f6aa38-43e0-4d04-a3a9-12b046d30937-kube-api-access-qflqx\") pod \"openshift-config-operator-7777fb866f-skj7q\" (UID: \"f4f6aa38-43e0-4d04-a3a9-12b046d30937\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.427377 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwfgz\" (UniqueName: \"kubernetes.io/projected/57ce51dd-e252-4911-aee9-d4755db74869-kube-api-access-zwfgz\") pod \"machine-config-operator-74547568cd-w5kl7\" (UID: \"57ce51dd-e252-4911-aee9-d4755db74869\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.447522 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw56n\" (UniqueName: \"kubernetes.io/projected/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-kube-api-access-nw56n\") pod \"console-f9d7485db-c4ckr\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.467311 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pkwr\" (UniqueName: \"kubernetes.io/projected/20be2be6-dc74-4404-b883-1ad4af94512b-kube-api-access-7pkwr\") pod \"downloads-7954f5f757-l4qn4\" (UID: \"20be2be6-dc74-4404-b883-1ad4af94512b\") " pod="openshift-console/downloads-7954f5f757-l4qn4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.467509 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.482217 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.486933 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv5xq\" (UniqueName: \"kubernetes.io/projected/ccba369f-e378-4f2d-b733-f658edbd6c99-kube-api-access-dv5xq\") pod \"cluster-image-registry-operator-dc59b4c8b-8jfp4\" (UID: \"ccba369f-e378-4f2d-b733-f658edbd6c99\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.510104 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s8kb\" (UniqueName: \"kubernetes.io/projected/a4e9bf47-202b-4206-8758-a446e86d7a6b-kube-api-access-6s8kb\") pod \"apiserver-76f77b778f-6hr78\" (UID: \"a4e9bf47-202b-4206-8758-a446e86d7a6b\") " pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.510215 4728 request.go:700] Waited for 1.900203884s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.526318 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e8cbf19-0e6b-43ce-996c-11b1776e6eae-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rg87q\" (UID: \"7e8cbf19-0e6b-43ce-996c-11b1776e6eae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.555296 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpqlp\" (UniqueName: \"kubernetes.io/projected/588eb6e9-6d28-438d-b881-ab944960aa79-kube-api-access-cpqlp\") pod \"console-operator-58897d9998-csv4c\" (UID: \"588eb6e9-6d28-438d-b881-ab944960aa79\") " pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.577862 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zbp\" (UniqueName: \"kubernetes.io/projected/3269cf72-ed95-40a4-84d6-74e53ea1c850-kube-api-access-22zbp\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfnww\" (UID: \"3269cf72-ed95-40a4-84d6-74e53ea1c850\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.601117 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.603334 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5l7q\" (UniqueName: \"kubernetes.io/projected/33b2bdb9-5bcb-4977-8722-3d2fa6f8e291-kube-api-access-b5l7q\") pod \"cluster-samples-operator-665b6dd947-27mqj\" (UID: \"33b2bdb9-5bcb-4977-8722-3d2fa6f8e291\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.615266 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtsh2\" (UniqueName: \"kubernetes.io/projected/bdb06213-4bce-43d5-b16f-0bc09dc118fe-kube-api-access-dtsh2\") pod \"route-controller-manager-6576b87f9c-ww7sf\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.619328 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.637158 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kzz5\" (UniqueName: \"kubernetes.io/projected/15f8fdf4-3a93-4957-ad97-a1a376d821cd-kube-api-access-8kzz5\") pod \"controller-manager-879f6c89f-tlnkw\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.638291 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.643147 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.644887 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7"] Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.647159 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhc4b\" (UniqueName: \"kubernetes.io/projected/89739118-42f8-49bd-a5bf-f5f04e612dab-kube-api-access-qhc4b\") pod \"openshift-apiserver-operator-796bbdcf4f-nd75l\" (UID: \"89739118-42f8-49bd-a5bf-f5f04e612dab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.652260 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.662228 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.664098 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9"] Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.666415 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77hxv\" (UniqueName: \"kubernetes.io/projected/87039d42-443e-40f7-abe1-a6462556cc32-kube-api-access-77hxv\") pod \"oauth-openshift-558db77b4-cmjx5\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.671631 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:02 crc kubenswrapper[4728]: W0204 11:30:02.675687 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56aef793_e703_49e3_b6c5_b07e9610b661.slice/crio-6918ae5897b25365cd5625aa03cf46e8a47f7c885133581ca6846bc570b2ca76 WatchSource:0}: Error finding container 6918ae5897b25365cd5625aa03cf46e8a47f7c885133581ca6846bc570b2ca76: Status 404 returned error can't find the container with id 6918ae5897b25365cd5625aa03cf46e8a47f7c885133581ca6846bc570b2ca76 Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.677306 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l4qn4" Feb 04 11:30:02 crc kubenswrapper[4728]: W0204 11:30:02.680586 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031aab71_b01b_4173_a760_dc26e36374ae.slice/crio-13e4518e922737a2a9ed031ef86d5ae35a956cb85a9572838f53e1ee3e0236b1 WatchSource:0}: Error finding container 13e4518e922737a2a9ed031ef86d5ae35a956cb85a9572838f53e1ee3e0236b1: Status 404 returned error can't find the container with id 13e4518e922737a2a9ed031ef86d5ae35a956cb85a9572838f53e1ee3e0236b1 Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.687239 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.687675 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9s5f\" (UniqueName: \"kubernetes.io/projected/a16ecbae-a304-444d-b36c-c3e82a1332a1-kube-api-access-g9s5f\") pod \"apiserver-7bbb656c7d-gmm7n\" (UID: \"a16ecbae-a304-444d-b36c-c3e82a1332a1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.697591 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.710378 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzfnt\" (UniqueName: \"kubernetes.io/projected/1f802986-f97c-4813-9aec-d48d43eeedae-kube-api-access-xzfnt\") pod \"machine-api-operator-5694c8668f-r576m\" (UID: \"1f802986-f97c-4813-9aec-d48d43eeedae\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.710661 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.736026 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4983cdcb-9bb3-41d2-9164-f24ee5753562-ca-trust-extracted\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.736084 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-trusted-ca\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.736119 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4983cdcb-9bb3-41d2-9164-f24ee5753562-installation-pull-secrets\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.736147 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-certificates\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.736181 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-bound-sa-token\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.736234 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.736264 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-tls\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.736291 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l7kf\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-kube-api-access-5l7kf\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: E0204 11:30:02.736839 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:03.236820608 +0000 UTC m=+152.379524993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.751439 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.766141 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.773154 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.789951 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.823848 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.837314 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.837811 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43618563-306c-4371-a92b-4baf4aa7e352-apiservice-cert\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.837883 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e3d50ac-921a-4d44-b6ed-cfc7709f4863-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bvvbh\" (UID: \"0e3d50ac-921a-4d44-b6ed-cfc7709f4863\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.837947 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/083fbf33-a605-4fbc-8bef-6ad1b73a8059-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s6rlf\" (UID: \"083fbf33-a605-4fbc-8bef-6ad1b73a8059\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.837970 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-etcd-client\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.837995 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/335a17f2-115c-479a-9dfb-01f13b079108-default-certificate\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838045 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2v2s5\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") " pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838068 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-socket-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 
11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838105 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43618563-306c-4371-a92b-4baf4aa7e352-webhook-cert\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838126 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dda38b8-011e-4cde-a88e-abdab857fe2f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d98w7\" (UID: \"9dda38b8-011e-4cde-a88e-abdab857fe2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838148 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c79m6\" (UniqueName: \"kubernetes.io/projected/69941123-11bc-4cf2-8a71-665627a08a99-kube-api-access-c79m6\") pod \"ingress-canary-njv76\" (UID: \"69941123-11bc-4cf2-8a71-665627a08a99\") " pod="openshift-ingress-canary/ingress-canary-njv76" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838170 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cef0962-392a-46f0-9bc4-14e6547d36c4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n58bn\" (UID: \"0cef0962-392a-46f0-9bc4-14e6547d36c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838189 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cef0962-392a-46f0-9bc4-14e6547d36c4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n58bn\" (UID: \"0cef0962-392a-46f0-9bc4-14e6547d36c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838214 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9gg4\" (UniqueName: \"kubernetes.io/projected/07b49a63-3679-433b-8b24-d2322125ccc9-kube-api-access-z9gg4\") pod \"migrator-59844c95c7-d96m4\" (UID: \"07b49a63-3679-433b-8b24-d2322125ccc9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838259 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bd76fb4c-1a31-412e-ae24-5c798e86178e-signing-cabundle\") pod \"service-ca-9c57cc56f-bnrcg\" (UID: \"bd76fb4c-1a31-412e-ae24-5c798e86178e\") " pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838281 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b56w\" (UniqueName: \"kubernetes.io/projected/d800f63b-2465-4553-aa78-99fff8f484bb-kube-api-access-7b56w\") pod \"collect-profiles-29503410-lkj8c\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838349 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-trusted-ca\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838419 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a7a6f641-6419-4f6f-be79-e558596dc1c1-certs\") pod \"machine-config-server-zrwsp\" (UID: \"a7a6f641-6419-4f6f-be79-e558596dc1c1\") " pod="openshift-machine-config-operator/machine-config-server-zrwsp" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838446 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgqs8\" (UniqueName: \"kubernetes.io/projected/052783b7-76c5-4c69-bc04-72230f147ee4-kube-api-access-vgqs8\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838480 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gds5\" (UniqueName: \"kubernetes.io/projected/51ce2604-c544-4207-8f53-daea97729643-kube-api-access-5gds5\") pod \"dns-default-5695c\" (UID: \"51ce2604-c544-4207-8f53-daea97729643\") " pod="openshift-dns/dns-default-5695c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838503 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx44m\" (UniqueName: \"kubernetes.io/projected/ca8423fa-0c80-4400-9a3b-5bab042ae353-kube-api-access-cx44m\") pod \"catalog-operator-68c6474976-k8snv\" (UID: \"ca8423fa-0c80-4400-9a3b-5bab042ae353\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838524 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hvqr\" (UniqueName: \"kubernetes.io/projected/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-kube-api-access-2hvqr\") pod \"marketplace-operator-79b997595-2v2s5\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") " pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838545 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-registration-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838590 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dda38b8-011e-4cde-a88e-abdab857fe2f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d98w7\" (UID: \"9dda38b8-011e-4cde-a88e-abdab857fe2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" Feb 04 
11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838622 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-certificates\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838665 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-config\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838687 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/335a17f2-115c-479a-9dfb-01f13b079108-metrics-certs\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838705 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv62c\" (UniqueName: \"kubernetes.io/projected/335a17f2-115c-479a-9dfb-01f13b079108-kube-api-access-hv62c\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838799 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dda38b8-011e-4cde-a88e-abdab857fe2f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d98w7\" (UID: \"9dda38b8-011e-4cde-a88e-abdab857fe2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838824 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jq69\" (UniqueName: \"kubernetes.io/projected/a7a6f641-6419-4f6f-be79-e558596dc1c1-kube-api-access-8jq69\") pod \"machine-config-server-zrwsp\" (UID: \"a7a6f641-6419-4f6f-be79-e558596dc1c1\") " pod="openshift-machine-config-operator/machine-config-server-zrwsp" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838879 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-csi-data-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838953 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d422575c-0503-4a8b-aa39-f8131db07fbd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zchf4\" (UID: \"d422575c-0503-4a8b-aa39-f8131db07fbd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838977 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a7a6f641-6419-4f6f-be79-e558596dc1c1-node-bootstrap-token\") pod \"machine-config-server-zrwsp\" (UID: \"a7a6f641-6419-4f6f-be79-e558596dc1c1\") " pod="openshift-machine-config-operator/machine-config-server-zrwsp" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.838996 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-mountpoint-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839044 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/43618563-306c-4371-a92b-4baf4aa7e352-tmpfs\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839067 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/052783b7-76c5-4c69-bc04-72230f147ee4-metrics-tls\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839162 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r59c9\" (UniqueName: \"kubernetes.io/projected/7b16e96d-9a89-4901-af25-b15ac64ffe90-kube-api-access-r59c9\") pod \"package-server-manager-789f6589d5-z5qx6\" (UID: \"7b16e96d-9a89-4901-af25-b15ac64ffe90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839187 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d800f63b-2465-4553-aa78-99fff8f484bb-config-volume\") pod \"collect-profiles-29503410-lkj8c\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839286 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9q28\" (UniqueName: \"kubernetes.io/projected/083fbf33-a605-4fbc-8bef-6ad1b73a8059-kube-api-access-r9q28\") pod \"multus-admission-controller-857f4d67dd-s6rlf\" (UID: \"083fbf33-a605-4fbc-8bef-6ad1b73a8059\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839453 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/51ce2604-c544-4207-8f53-daea97729643-metrics-tls\") pod \"dns-default-5695c\" (UID: \"51ce2604-c544-4207-8f53-daea97729643\") " pod="openshift-dns/dns-default-5695c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839480 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/ca8423fa-0c80-4400-9a3b-5bab042ae353-srv-cert\") pod \"catalog-operator-68c6474976-k8snv\" (UID: \"ca8423fa-0c80-4400-9a3b-5bab042ae353\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839501 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ca8423fa-0c80-4400-9a3b-5bab042ae353-profile-collector-cert\") pod \"catalog-operator-68c6474976-k8snv\" (UID: \"ca8423fa-0c80-4400-9a3b-5bab042ae353\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839592 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7e87130-d830-4391-a2b3-40d2b63149e2-metrics-tls\") pod \"dns-operator-744455d44c-qdvrf\" (UID: \"a7e87130-d830-4391-a2b3-40d2b63149e2\") " pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839615 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/335a17f2-115c-479a-9dfb-01f13b079108-stats-auth\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839636 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/052783b7-76c5-4c69-bc04-72230f147ee4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839659 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d800f63b-2465-4553-aa78-99fff8f484bb-secret-volume\") pod \"collect-profiles-29503410-lkj8c\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839681 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-serving-cert\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.839795 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69941123-11bc-4cf2-8a71-665627a08a99-cert\") pod \"ingress-canary-njv76\" (UID: \"69941123-11bc-4cf2-8a71-665627a08a99\") " pod="openshift-ingress-canary/ingress-canary-njv76" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840128 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjhzk\" (UniqueName: \"kubernetes.io/projected/cb2c7326-dd1e-481c-ad3f-c8f884d636b1-kube-api-access-rjhzk\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvhf8\" (UID: 
\"cb2c7326-dd1e-481c-ad3f-c8f884d636b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840182 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4983cdcb-9bb3-41d2-9164-f24ee5753562-ca-trust-extracted\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840210 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxgw\" (UniqueName: \"kubernetes.io/projected/d422575c-0503-4a8b-aa39-f8131db07fbd-kube-api-access-spxgw\") pod \"olm-operator-6b444d44fb-zchf4\" (UID: \"d422575c-0503-4a8b-aa39-f8131db07fbd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840268 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d422575c-0503-4a8b-aa39-f8131db07fbd-srv-cert\") pod \"olm-operator-6b444d44fb-zchf4\" (UID: \"d422575c-0503-4a8b-aa39-f8131db07fbd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840289 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b16e96d-9a89-4901-af25-b15ac64ffe90-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-z5qx6\" (UID: \"7b16e96d-9a89-4901-af25-b15ac64ffe90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840311 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-plugins-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840335 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb2c7326-dd1e-481c-ad3f-c8f884d636b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvhf8\" (UID: \"cb2c7326-dd1e-481c-ad3f-c8f884d636b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840358 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/335a17f2-115c-479a-9dfb-01f13b079108-service-ca-bundle\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840396 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4983cdcb-9bb3-41d2-9164-f24ee5753562-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840419 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b84w9\" (UniqueName: \"kubernetes.io/projected/f58ac097-02b6-4c5c-a670-62f8fcdc5853-kube-api-access-b84w9\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840440 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/052783b7-76c5-4c69-bc04-72230f147ee4-trusted-ca\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840698 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51ce2604-c544-4207-8f53-daea97729643-config-volume\") pod \"dns-default-5695c\" (UID: \"51ce2604-c544-4207-8f53-daea97729643\") " pod="openshift-dns/dns-default-5695c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840728 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzscn\" (UniqueName: \"kubernetes.io/projected/bd76fb4c-1a31-412e-ae24-5c798e86178e-kube-api-access-bzscn\") pod \"service-ca-9c57cc56f-bnrcg\" (UID: \"bd76fb4c-1a31-412e-ae24-5c798e86178e\") " pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840794 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-etcd-service-ca\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.840946 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48d5n\" (UniqueName: \"kubernetes.io/projected/43618563-306c-4371-a92b-4baf4aa7e352-kube-api-access-48d5n\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.841205 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2v2s5\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") " pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.841253 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-bound-sa-token\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.841396 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e3d50ac-921a-4d44-b6ed-cfc7709f4863-proxy-tls\") pod \"machine-config-controller-84d6567774-bvvbh\" (UID: \"0e3d50ac-921a-4d44-b6ed-cfc7709f4863\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.841469 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cef0962-392a-46f0-9bc4-14e6547d36c4-config\") pod \"kube-controller-manager-operator-78b949d7b-n58bn\" (UID: \"0cef0962-392a-46f0-9bc4-14e6547d36c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.841493 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrsjk\" (UniqueName: \"kubernetes.io/projected/0e3d50ac-921a-4d44-b6ed-cfc7709f4863-kube-api-access-vrsjk\") pod \"machine-config-controller-84d6567774-bvvbh\" (UID: \"0e3d50ac-921a-4d44-b6ed-cfc7709f4863\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.841555 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zmcw\" (UniqueName: \"kubernetes.io/projected/a7e87130-d830-4391-a2b3-40d2b63149e2-kube-api-access-4zmcw\") pod \"dns-operator-744455d44c-qdvrf\" (UID: \"a7e87130-d830-4391-a2b3-40d2b63149e2\") " pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.841680 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-etcd-ca\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.841706 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv5ns\" (UniqueName: \"kubernetes.io/projected/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-kube-api-access-wv5ns\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.841733 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-tls\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.841775 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bd76fb4c-1a31-412e-ae24-5c798e86178e-signing-key\") pod \"service-ca-9c57cc56f-bnrcg\" (UID: \"bd76fb4c-1a31-412e-ae24-5c798e86178e\") " pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" Feb 04 
11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.841875 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l7kf\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-kube-api-access-5l7kf\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.844358 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4983cdcb-9bb3-41d2-9164-f24ee5753562-ca-trust-extracted\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.845860 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-certificates\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.852204 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-trusted-ca\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: E0204 11:30:02.853012 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:03.352960554 +0000 UTC m=+152.495665089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.854740 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-tls\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.855410 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4983cdcb-9bb3-41d2-9164-f24ee5753562-installation-pull-secrets\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.890179 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l7kf\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-kube-api-access-5l7kf\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.909068 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-bound-sa-token\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.919557 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" event={"ID":"031aab71-b01b-4173-a760-dc26e36374ae","Type":"ContainerStarted","Data":"13e4518e922737a2a9ed031ef86d5ae35a956cb85a9572838f53e1ee3e0236b1"} Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.927603 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.929734 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6hr78"] Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.931228 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" event={"ID":"56aef793-e703-49e3-b6c5-b07e9610b661","Type":"ContainerStarted","Data":"6918ae5897b25365cd5625aa03cf46e8a47f7c885133581ca6846bc570b2ca76"} Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945150 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2v2s5\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") " pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945214 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e3d50ac-921a-4d44-b6ed-cfc7709f4863-proxy-tls\") pod \"machine-config-controller-84d6567774-bvvbh\" (UID: \"0e3d50ac-921a-4d44-b6ed-cfc7709f4863\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945240 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cef0962-392a-46f0-9bc4-14e6547d36c4-config\") pod \"kube-controller-manager-operator-78b949d7b-n58bn\" (UID: \"0cef0962-392a-46f0-9bc4-14e6547d36c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945263 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmcw\" (UniqueName: \"kubernetes.io/projected/a7e87130-d830-4391-a2b3-40d2b63149e2-kube-api-access-4zmcw\") pod \"dns-operator-744455d44c-qdvrf\" (UID: \"a7e87130-d830-4391-a2b3-40d2b63149e2\") " pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945289 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrsjk\" (UniqueName: \"kubernetes.io/projected/0e3d50ac-921a-4d44-b6ed-cfc7709f4863-kube-api-access-vrsjk\") pod \"machine-config-controller-84d6567774-bvvbh\" (UID: \"0e3d50ac-921a-4d44-b6ed-cfc7709f4863\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945356 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945380 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-etcd-ca\") pod \"etcd-operator-b45778765-4k2zf\" (UID: 
\"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945404 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv5ns\" (UniqueName: \"kubernetes.io/projected/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-kube-api-access-wv5ns\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945427 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bd76fb4c-1a31-412e-ae24-5c798e86178e-signing-key\") pod \"service-ca-9c57cc56f-bnrcg\" (UID: \"bd76fb4c-1a31-412e-ae24-5c798e86178e\") " pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945453 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43618563-306c-4371-a92b-4baf4aa7e352-apiservice-cert\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945476 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/083fbf33-a605-4fbc-8bef-6ad1b73a8059-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s6rlf\" (UID: \"083fbf33-a605-4fbc-8bef-6ad1b73a8059\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945501 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-etcd-client\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945526 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e3d50ac-921a-4d44-b6ed-cfc7709f4863-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bvvbh\" (UID: \"0e3d50ac-921a-4d44-b6ed-cfc7709f4863\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945554 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/335a17f2-115c-479a-9dfb-01f13b079108-default-certificate\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945578 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2v2s5\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") " pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945603 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-socket-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945631 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43618563-306c-4371-a92b-4baf4aa7e352-webhook-cert\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945654 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dda38b8-011e-4cde-a88e-abdab857fe2f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d98w7\" (UID: \"9dda38b8-011e-4cde-a88e-abdab857fe2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945683 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cef0962-392a-46f0-9bc4-14e6547d36c4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n58bn\" (UID: \"0cef0962-392a-46f0-9bc4-14e6547d36c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945706 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cef0962-392a-46f0-9bc4-14e6547d36c4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n58bn\" (UID: \"0cef0962-392a-46f0-9bc4-14e6547d36c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945730 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c79m6\" (UniqueName: \"kubernetes.io/projected/69941123-11bc-4cf2-8a71-665627a08a99-kube-api-access-c79m6\") pod \"ingress-canary-njv76\" (UID: \"69941123-11bc-4cf2-8a71-665627a08a99\") " pod="openshift-ingress-canary/ingress-canary-njv76" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945795 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9gg4\" (UniqueName: \"kubernetes.io/projected/07b49a63-3679-433b-8b24-d2322125ccc9-kube-api-access-z9gg4\") pod \"migrator-59844c95c7-d96m4\" (UID: \"07b49a63-3679-433b-8b24-d2322125ccc9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945820 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bd76fb4c-1a31-412e-ae24-5c798e86178e-signing-cabundle\") pod \"service-ca-9c57cc56f-bnrcg\" (UID: \"bd76fb4c-1a31-412e-ae24-5c798e86178e\") " pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945841 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b56w\" (UniqueName: 
\"kubernetes.io/projected/d800f63b-2465-4553-aa78-99fff8f484bb-kube-api-access-7b56w\") pod \"collect-profiles-29503410-lkj8c\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:02 crc kubenswrapper[4728]: E0204 11:30:02.945856 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:03.445841053 +0000 UTC m=+152.588545438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945891 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a7a6f641-6419-4f6f-be79-e558596dc1c1-certs\") pod \"machine-config-server-zrwsp\" (UID: \"a7a6f641-6419-4f6f-be79-e558596dc1c1\") " pod="openshift-machine-config-operator/machine-config-server-zrwsp" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945927 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgqs8\" (UniqueName: \"kubernetes.io/projected/052783b7-76c5-4c69-bc04-72230f147ee4-kube-api-access-vgqs8\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945958 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx44m\" (UniqueName: \"kubernetes.io/projected/ca8423fa-0c80-4400-9a3b-5bab042ae353-kube-api-access-cx44m\") pod \"catalog-operator-68c6474976-k8snv\" (UID: \"ca8423fa-0c80-4400-9a3b-5bab042ae353\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.945981 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hvqr\" (UniqueName: \"kubernetes.io/projected/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-kube-api-access-2hvqr\") pod \"marketplace-operator-79b997595-2v2s5\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") " pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946005 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gds5\" (UniqueName: \"kubernetes.io/projected/51ce2604-c544-4207-8f53-daea97729643-kube-api-access-5gds5\") pod \"dns-default-5695c\" (UID: \"51ce2604-c544-4207-8f53-daea97729643\") " pod="openshift-dns/dns-default-5695c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946027 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-registration-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 
11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946056 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dda38b8-011e-4cde-a88e-abdab857fe2f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d98w7\" (UID: \"9dda38b8-011e-4cde-a88e-abdab857fe2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946083 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-config\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946105 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/335a17f2-115c-479a-9dfb-01f13b079108-metrics-certs\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946125 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv62c\" (UniqueName: \"kubernetes.io/projected/335a17f2-115c-479a-9dfb-01f13b079108-kube-api-access-hv62c\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946146 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dda38b8-011e-4cde-a88e-abdab857fe2f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d98w7\" (UID: \"9dda38b8-011e-4cde-a88e-abdab857fe2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946165 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jq69\" (UniqueName: \"kubernetes.io/projected/a7a6f641-6419-4f6f-be79-e558596dc1c1-kube-api-access-8jq69\") pod \"machine-config-server-zrwsp\" (UID: \"a7a6f641-6419-4f6f-be79-e558596dc1c1\") " pod="openshift-machine-config-operator/machine-config-server-zrwsp" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946186 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-csi-data-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946206 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d422575c-0503-4a8b-aa39-f8131db07fbd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zchf4\" (UID: \"d422575c-0503-4a8b-aa39-f8131db07fbd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946226 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/43618563-306c-4371-a92b-4baf4aa7e352-tmpfs\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946243 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a7a6f641-6419-4f6f-be79-e558596dc1c1-node-bootstrap-token\") pod \"machine-config-server-zrwsp\" (UID: \"a7a6f641-6419-4f6f-be79-e558596dc1c1\") " pod="openshift-machine-config-operator/machine-config-server-zrwsp" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946258 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-mountpoint-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946277 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/052783b7-76c5-4c69-bc04-72230f147ee4-metrics-tls\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946296 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r59c9\" (UniqueName: \"kubernetes.io/projected/7b16e96d-9a89-4901-af25-b15ac64ffe90-kube-api-access-r59c9\") pod \"package-server-manager-789f6589d5-z5qx6\" (UID: \"7b16e96d-9a89-4901-af25-b15ac64ffe90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946317 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d800f63b-2465-4553-aa78-99fff8f484bb-config-volume\") pod \"collect-profiles-29503410-lkj8c\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946342 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9q28\" (UniqueName: \"kubernetes.io/projected/083fbf33-a605-4fbc-8bef-6ad1b73a8059-kube-api-access-r9q28\") pod \"multus-admission-controller-857f4d67dd-s6rlf\" (UID: \"083fbf33-a605-4fbc-8bef-6ad1b73a8059\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946360 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/51ce2604-c544-4207-8f53-daea97729643-metrics-tls\") pod \"dns-default-5695c\" (UID: \"51ce2604-c544-4207-8f53-daea97729643\") " pod="openshift-dns/dns-default-5695c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946374 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ca8423fa-0c80-4400-9a3b-5bab042ae353-srv-cert\") pod \"catalog-operator-68c6474976-k8snv\" (UID: \"ca8423fa-0c80-4400-9a3b-5bab042ae353\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:02 
crc kubenswrapper[4728]: I0204 11:30:02.946392 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ca8423fa-0c80-4400-9a3b-5bab042ae353-profile-collector-cert\") pod \"catalog-operator-68c6474976-k8snv\" (UID: \"ca8423fa-0c80-4400-9a3b-5bab042ae353\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946414 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/335a17f2-115c-479a-9dfb-01f13b079108-stats-auth\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946430 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7e87130-d830-4391-a2b3-40d2b63149e2-metrics-tls\") pod \"dns-operator-744455d44c-qdvrf\" (UID: \"a7e87130-d830-4391-a2b3-40d2b63149e2\") " pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946455 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d800f63b-2465-4553-aa78-99fff8f484bb-secret-volume\") pod \"collect-profiles-29503410-lkj8c\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946471 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-serving-cert\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946485 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/052783b7-76c5-4c69-bc04-72230f147ee4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946507 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69941123-11bc-4cf2-8a71-665627a08a99-cert\") pod \"ingress-canary-njv76\" (UID: \"69941123-11bc-4cf2-8a71-665627a08a99\") " pod="openshift-ingress-canary/ingress-canary-njv76" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946522 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjhzk\" (UniqueName: \"kubernetes.io/projected/cb2c7326-dd1e-481c-ad3f-c8f884d636b1-kube-api-access-rjhzk\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvhf8\" (UID: \"cb2c7326-dd1e-481c-ad3f-c8f884d636b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946543 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spxgw\" (UniqueName: \"kubernetes.io/projected/d422575c-0503-4a8b-aa39-f8131db07fbd-kube-api-access-spxgw\") pod 
\"olm-operator-6b444d44fb-zchf4\" (UID: \"d422575c-0503-4a8b-aa39-f8131db07fbd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946560 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d422575c-0503-4a8b-aa39-f8131db07fbd-srv-cert\") pod \"olm-operator-6b444d44fb-zchf4\" (UID: \"d422575c-0503-4a8b-aa39-f8131db07fbd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946576 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb2c7326-dd1e-481c-ad3f-c8f884d636b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvhf8\" (UID: \"cb2c7326-dd1e-481c-ad3f-c8f884d636b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946595 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/335a17f2-115c-479a-9dfb-01f13b079108-service-ca-bundle\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946610 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b16e96d-9a89-4901-af25-b15ac64ffe90-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-z5qx6\" (UID: \"7b16e96d-9a89-4901-af25-b15ac64ffe90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946624 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-plugins-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946645 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b84w9\" (UniqueName: \"kubernetes.io/projected/f58ac097-02b6-4c5c-a670-62f8fcdc5853-kube-api-access-b84w9\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946661 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/052783b7-76c5-4c69-bc04-72230f147ee4-trusted-ca\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946684 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51ce2604-c544-4207-8f53-daea97729643-config-volume\") pod \"dns-default-5695c\" (UID: \"51ce2604-c544-4207-8f53-daea97729643\") " pod="openshift-dns/dns-default-5695c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 
11:30:02.946704 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzscn\" (UniqueName: \"kubernetes.io/projected/bd76fb4c-1a31-412e-ae24-5c798e86178e-kube-api-access-bzscn\") pod \"service-ca-9c57cc56f-bnrcg\" (UID: \"bd76fb4c-1a31-412e-ae24-5c798e86178e\") " pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946718 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-etcd-service-ca\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946740 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48d5n\" (UniqueName: \"kubernetes.io/projected/43618563-306c-4371-a92b-4baf4aa7e352-kube-api-access-48d5n\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.947579 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bd76fb4c-1a31-412e-ae24-5c798e86178e-signing-cabundle\") pod \"service-ca-9c57cc56f-bnrcg\" (UID: \"bd76fb4c-1a31-412e-ae24-5c798e86178e\") " pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.951154 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2v2s5\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") " pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.953847 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cef0962-392a-46f0-9bc4-14e6547d36c4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n58bn\" (UID: \"0cef0962-392a-46f0-9bc4-14e6547d36c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.954468 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dda38b8-011e-4cde-a88e-abdab857fe2f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d98w7\" (UID: \"9dda38b8-011e-4cde-a88e-abdab857fe2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.956027 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-socket-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.959368 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e3d50ac-921a-4d44-b6ed-cfc7709f4863-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-bvvbh\" (UID: \"0e3d50ac-921a-4d44-b6ed-cfc7709f4863\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.960960 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d800f63b-2465-4553-aa78-99fff8f484bb-config-volume\") pod \"collect-profiles-29503410-lkj8c\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.963129 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-plugins-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.963213 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-csv4c"] Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.946682 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-etcd-ca\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.965401 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51ce2604-c544-4207-8f53-daea97729643-config-volume\") pod \"dns-default-5695c\" (UID: \"51ce2604-c544-4207-8f53-daea97729643\") " pod="openshift-dns/dns-default-5695c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.965446 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/335a17f2-115c-479a-9dfb-01f13b079108-service-ca-bundle\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.965789 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-registration-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.965967 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cef0962-392a-46f0-9bc4-14e6547d36c4-config\") pod \"kube-controller-manager-operator-78b949d7b-n58bn\" (UID: \"0cef0962-392a-46f0-9bc4-14e6547d36c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.966498 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/052783b7-76c5-4c69-bc04-72230f147ee4-trusted-ca\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:02 crc 
kubenswrapper[4728]: I0204 11:30:02.967058 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e3d50ac-921a-4d44-b6ed-cfc7709f4863-proxy-tls\") pod \"machine-config-controller-84d6567774-bvvbh\" (UID: \"0e3d50ac-921a-4d44-b6ed-cfc7709f4863\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.967776 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-config\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.968140 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/335a17f2-115c-479a-9dfb-01f13b079108-default-certificate\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.968536 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/51ce2604-c544-4207-8f53-daea97729643-metrics-tls\") pod \"dns-default-5695c\" (UID: \"51ce2604-c544-4207-8f53-daea97729643\") " pod="openshift-dns/dns-default-5695c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.969235 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bd76fb4c-1a31-412e-ae24-5c798e86178e-signing-key\") pod \"service-ca-9c57cc56f-bnrcg\" (UID: \"bd76fb4c-1a31-412e-ae24-5c798e86178e\") " pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.970940 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43618563-306c-4371-a92b-4baf4aa7e352-apiservice-cert\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.971099 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-etcd-service-ca\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.972261 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2v2s5\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") " pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.972387 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-csi-data-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 
crc kubenswrapper[4728]: I0204 11:30:02.972448 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f58ac097-02b6-4c5c-a670-62f8fcdc5853-mountpoint-dir\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.972734 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.973541 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/43618563-306c-4371-a92b-4baf4aa7e352-tmpfs\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.976892 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" event={"ID":"0352f14b-41bc-4c68-961b-51b6a4cc7a53","Type":"ContainerStarted","Data":"5ced878749be695430bbd97083c3ed47c9eb2065e52dfd3cf422070d16f1d1b2"} Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.977574 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" event={"ID":"0352f14b-41bc-4c68-961b-51b6a4cc7a53","Type":"ContainerStarted","Data":"2ba464c417feefa35d3fe96309ac80aff17eddad1006ccfe4b011f90677c1949"} Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.978607 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d422575c-0503-4a8b-aa39-f8131db07fbd-srv-cert\") pod \"olm-operator-6b444d44fb-zchf4\" (UID: \"d422575c-0503-4a8b-aa39-f8131db07fbd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.978631 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69941123-11bc-4cf2-8a71-665627a08a99-cert\") pod \"ingress-canary-njv76\" (UID: \"69941123-11bc-4cf2-8a71-665627a08a99\") " pod="openshift-ingress-canary/ingress-canary-njv76" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.978889 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-serving-cert\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.979226 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" event={"ID":"57ce51dd-e252-4911-aee9-d4755db74869","Type":"ContainerStarted","Data":"c0a4d3236960d659969b9a24542fb194904f7551ebd07076aeb056d97db508e9"} Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.980246 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-etcd-client\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 
11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.981449 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/083fbf33-a605-4fbc-8bef-6ad1b73a8059-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s6rlf\" (UID: \"083fbf33-a605-4fbc-8bef-6ad1b73a8059\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.986108 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cef0962-392a-46f0-9bc4-14e6547d36c4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n58bn\" (UID: \"0cef0962-392a-46f0-9bc4-14e6547d36c4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.986303 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/335a17f2-115c-479a-9dfb-01f13b079108-metrics-certs\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.986910 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb2c7326-dd1e-481c-ad3f-c8f884d636b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvhf8\" (UID: \"cb2c7326-dd1e-481c-ad3f-c8f884d636b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.986982 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ca8423fa-0c80-4400-9a3b-5bab042ae353-srv-cert\") pod \"catalog-operator-68c6474976-k8snv\" (UID: \"ca8423fa-0c80-4400-9a3b-5bab042ae353\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.987748 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43618563-306c-4371-a92b-4baf4aa7e352-webhook-cert\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.988100 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a7a6f641-6419-4f6f-be79-e558596dc1c1-node-bootstrap-token\") pod \"machine-config-server-zrwsp\" (UID: \"a7a6f641-6419-4f6f-be79-e558596dc1c1\") " pod="openshift-machine-config-operator/machine-config-server-zrwsp" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.988287 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a7a6f641-6419-4f6f-be79-e558596dc1c1-certs\") pod \"machine-config-server-zrwsp\" (UID: \"a7a6f641-6419-4f6f-be79-e558596dc1c1\") " pod="openshift-machine-config-operator/machine-config-server-zrwsp" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.988717 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/ca8423fa-0c80-4400-9a3b-5bab042ae353-profile-collector-cert\") pod \"catalog-operator-68c6474976-k8snv\" (UID: \"ca8423fa-0c80-4400-9a3b-5bab042ae353\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.989436 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d422575c-0503-4a8b-aa39-f8131db07fbd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zchf4\" (UID: \"d422575c-0503-4a8b-aa39-f8131db07fbd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.990863 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/052783b7-76c5-4c69-bc04-72230f147ee4-metrics-tls\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.991153 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d800f63b-2465-4553-aa78-99fff8f484bb-secret-volume\") pod \"collect-profiles-29503410-lkj8c\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.991245 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7e87130-d830-4391-a2b3-40d2b63149e2-metrics-tls\") pod \"dns-operator-744455d44c-qdvrf\" (UID: \"a7e87130-d830-4391-a2b3-40d2b63149e2\") " pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.992410 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dda38b8-011e-4cde-a88e-abdab857fe2f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d98w7\" (UID: \"9dda38b8-011e-4cde-a88e-abdab857fe2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.993159 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b56w\" (UniqueName: \"kubernetes.io/projected/d800f63b-2465-4553-aa78-99fff8f484bb-kube-api-access-7b56w\") pod \"collect-profiles-29503410-lkj8c\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.993922 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/335a17f2-115c-479a-9dfb-01f13b079108-stats-auth\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.995036 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" Feb 04 11:30:02 crc kubenswrapper[4728]: I0204 11:30:02.995845 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b16e96d-9a89-4901-af25-b15ac64ffe90-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-z5qx6\" (UID: \"7b16e96d-9a89-4901-af25-b15ac64ffe90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.010562 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c79m6\" (UniqueName: \"kubernetes.io/projected/69941123-11bc-4cf2-8a71-665627a08a99-kube-api-access-c79m6\") pod \"ingress-canary-njv76\" (UID: \"69941123-11bc-4cf2-8a71-665627a08a99\") " pod="openshift-ingress-canary/ingress-canary-njv76" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.034998 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9gg4\" (UniqueName: \"kubernetes.io/projected/07b49a63-3679-433b-8b24-d2322125ccc9-kube-api-access-z9gg4\") pod \"migrator-59844c95c7-d96m4\" (UID: \"07b49a63-3679-433b-8b24-d2322125ccc9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.068029 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.070049 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:03.569995852 +0000 UTC m=+152.712700237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.071793 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.073527 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48d5n\" (UniqueName: \"kubernetes.io/projected/43618563-306c-4371-a92b-4baf4aa7e352-kube-api-access-48d5n\") pod \"packageserver-d55dfcdfc-lbdwv\" (UID: \"43618563-306c-4371-a92b-4baf4aa7e352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.074780 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.075392 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:03.575375173 +0000 UTC m=+152.718079558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.098510 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv5ns\" (UniqueName: \"kubernetes.io/projected/9e0e9f45-e348-47ad-8de3-b3a1d60eeeac-kube-api-access-wv5ns\") pod \"etcd-operator-b45778765-4k2zf\" (UID: \"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.099553 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.116221 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r59c9\" (UniqueName: \"kubernetes.io/projected/7b16e96d-9a89-4901-af25-b15ac64ffe90-kube-api-access-r59c9\") pod \"package-server-manager-789f6589d5-z5qx6\" (UID: \"7b16e96d-9a89-4901-af25-b15ac64ffe90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.120500 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq"] Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.123298 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.126421 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjhzk\" (UniqueName: \"kubernetes.io/projected/cb2c7326-dd1e-481c-ad3f-c8f884d636b1-kube-api-access-rjhzk\") pod \"control-plane-machine-set-operator-78cbb6b69f-cvhf8\" (UID: \"cb2c7326-dd1e-481c-ad3f-c8f884d636b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.141834 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.151061 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9q28\" (UniqueName: \"kubernetes.io/projected/083fbf33-a605-4fbc-8bef-6ad1b73a8059-kube-api-access-r9q28\") pod \"multus-admission-controller-857f4d67dd-s6rlf\" (UID: \"083fbf33-a605-4fbc-8bef-6ad1b73a8059\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.163288 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.164056 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.175358 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.175822 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:03.675802503 +0000 UTC m=+152.818506888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.178356 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dda38b8-011e-4cde-a88e-abdab857fe2f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d98w7\" (UID: \"9dda38b8-011e-4cde-a88e-abdab857fe2f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.190113 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.202544 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spxgw\" (UniqueName: \"kubernetes.io/projected/d422575c-0503-4a8b-aa39-f8131db07fbd-kube-api-access-spxgw\") pod \"olm-operator-6b444d44fb-zchf4\" (UID: \"d422575c-0503-4a8b-aa39-f8131db07fbd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.210107 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b84w9\" (UniqueName: \"kubernetes.io/projected/f58ac097-02b6-4c5c-a670-62f8fcdc5853-kube-api-access-b84w9\") pod \"csi-hostpathplugin-x6j2r\" (UID: \"f58ac097-02b6-4c5c-a670-62f8fcdc5853\") " pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.210543 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.223896 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj"] Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.227842 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.227985 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zmcw\" (UniqueName: \"kubernetes.io/projected/a7e87130-d830-4391-a2b3-40d2b63149e2-kube-api-access-4zmcw\") pod \"dns-operator-744455d44c-qdvrf\" (UID: \"a7e87130-d830-4391-a2b3-40d2b63149e2\") " pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.246601 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.251423 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l"] Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.258234 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.271652 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrsjk\" (UniqueName: \"kubernetes.io/projected/0e3d50ac-921a-4d44-b6ed-cfc7709f4863-kube-api-access-vrsjk\") pod \"machine-config-controller-84d6567774-bvvbh\" (UID: \"0e3d50ac-921a-4d44-b6ed-cfc7709f4863\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.276677 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzscn\" (UniqueName: \"kubernetes.io/projected/bd76fb4c-1a31-412e-ae24-5c798e86178e-kube-api-access-bzscn\") pod \"service-ca-9c57cc56f-bnrcg\" (UID: \"bd76fb4c-1a31-412e-ae24-5c798e86178e\") " pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.278882 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:03.778858563 +0000 UTC m=+152.921562948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.279592 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-njv76" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.277897 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.300498 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgqs8\" (UniqueName: \"kubernetes.io/projected/052783b7-76c5-4c69-bc04-72230f147ee4-kube-api-access-vgqs8\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.326020 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx44m\" (UniqueName: \"kubernetes.io/projected/ca8423fa-0c80-4400-9a3b-5bab042ae353-kube-api-access-cx44m\") pod \"catalog-operator-68c6474976-k8snv\" (UID: \"ca8423fa-0c80-4400-9a3b-5bab042ae353\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.344684 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c4ckr"] Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.359256 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q"] Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.361053 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hvqr\" (UniqueName: \"kubernetes.io/projected/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-kube-api-access-2hvqr\") pod \"marketplace-operator-79b997595-2v2s5\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") " pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.381681 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gds5\" (UniqueName: \"kubernetes.io/projected/51ce2604-c544-4207-8f53-daea97729643-kube-api-access-5gds5\") pod \"dns-default-5695c\" (UID: \"51ce2604-c544-4207-8f53-daea97729643\") " pod="openshift-dns/dns-default-5695c" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.382727 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.383347 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:03.883327279 +0000 UTC m=+153.026031664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.394291 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv62c\" (UniqueName: \"kubernetes.io/projected/335a17f2-115c-479a-9dfb-01f13b079108-kube-api-access-hv62c\") pod \"router-default-5444994796-zm7m8\" (UID: \"335a17f2-115c-479a-9dfb-01f13b079108\") " pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.399472 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/052783b7-76c5-4c69-bc04-72230f147ee4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-skkqn\" (UID: \"052783b7-76c5-4c69-bc04-72230f147ee4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.412931 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.420600 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jq69\" (UniqueName: \"kubernetes.io/projected/a7a6f641-6419-4f6f-be79-e558596dc1c1-kube-api-access-8jq69\") pod \"machine-config-server-zrwsp\" (UID: \"a7a6f641-6419-4f6f-be79-e558596dc1c1\") " pod="openshift-machine-config-operator/machine-config-server-zrwsp" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.472966 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.480719 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.484068 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.484350 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:03.984336575 +0000 UTC m=+153.127040960 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.500439 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv"
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.521241 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh"
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.568613 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5"
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.585141 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.585300 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.085267728 +0000 UTC m=+153.227972123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.585475 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.585658 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zrwsp"
Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.586118 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.086089709 +0000 UTC m=+153.228794094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.591837 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5695c"
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.625139 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4"]
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.667425 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tlnkw"]
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.686495 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.686663 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.186640732 +0000 UTC m=+153.329345127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.687013 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.687432 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.187421793 +0000 UTC m=+153.330126178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.689892 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zm7m8"
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.767591 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-skj7q"]
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.796968 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.797517 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.297496208 +0000 UTC m=+153.440200593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.809997 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l4qn4"]
Feb 04 11:30:03 crc kubenswrapper[4728]: I0204 11:30:03.903859 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:03 crc kubenswrapper[4728]: E0204 11:30:03.905052 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.405035036 +0000 UTC m=+153.547739421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
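
The repeating "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" failures above come from a simple lookup: the kubelet keeps a set of CSI drivers that have registered on this node, and building a client for an unregistered name fails. Below is a minimal, stdlib-only Go sketch of that lookup pattern under invented names (driverRegistry, client); it is an illustration of the failure mode, not the actual kubelet code.

package main

import (
	"fmt"
	"sync"
)

// driverRegistry mimics a node-local table of registered CSI drivers.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint (socket path)
}

// client returns the endpoint for a driver, failing with the same shape of
// error seen in the log when the driver has not registered yet.
func (r *driverRegistry) client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]string{}} // nothing registered yet
	if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("MountDevice failed:", err) // mirrors the log line
	}
}
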
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.005431 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.005905 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.505884467 +0000 UTC m=+153.648588852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.014128 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" event={"ID":"15f8fdf4-3a93-4957-ad97-a1a376d821cd","Type":"ContainerStarted","Data":"37d0b7ba7993499c98a6f1755994f5d5226496ed58df14b399313c2c99b58173"}
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.026018 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" event={"ID":"56aef793-e703-49e3-b6c5-b07e9610b661","Type":"ContainerStarted","Data":"349de61bea636899bbddfd0688e2c085da83e07133fc26575d6cc4e998158fb0"}
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.046883 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" event={"ID":"89739118-42f8-49bd-a5bf-f5f04e612dab","Type":"ContainerStarted","Data":"3f8dc7c0f1ed4a258398bfad3335b08979c63b832eef942c2f9c689098fe5e04"}
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.046946 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" event={"ID":"89739118-42f8-49bd-a5bf-f5f04e612dab","Type":"ContainerStarted","Data":"ee5bd8a3d044db406ab3ab8ebcb968becfdfc0e8472afe326c648532c78de456"}
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.111849 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.112172 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.612155371 +0000 UTC m=+153.754859756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.131222 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" event={"ID":"ccba369f-e378-4f2d-b733-f658edbd6c99","Type":"ContainerStarted","Data":"3f98815107fa464521db18dd5dc4b8b034e2a78cff0edbb8e1b731dcc7837498"}
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.133073 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4ckr" event={"ID":"86a5137c-eb55-438a-8e8d-99f2a2d4bf48","Type":"ContainerStarted","Data":"0f18ba12a390b3029b68df341aca4e29a1e7dc264a8ac5e0f826be6032b15cbd"}
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.146189 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" event={"ID":"f4f6aa38-43e0-4d04-a3a9-12b046d30937","Type":"ContainerStarted","Data":"6fcde9c13c13579222df1a082d4119ab10611aaebec42a2dc3bba1f34cf385cb"}
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.150652 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" event={"ID":"57ce51dd-e252-4911-aee9-d4755db74869","Type":"ContainerStarted","Data":"e0febb26aaf7543eb13c3bb2d2b90addbd21b08583a372f633c9c088ef5bb085"}
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.150715 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" event={"ID":"57ce51dd-e252-4911-aee9-d4755db74869","Type":"ContainerStarted","Data":"75de95320d8bbdcf86294760e77daa360d9a0425ef3b80a8a8ae13ca1c8f71fd"}
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.152936 4728 generic.go:334] "Generic (PLEG): container finished" podID="a4e9bf47-202b-4206-8758-a446e86d7a6b" containerID="1aca7db6ed73901399e41ac14e6168e32d30199e2fda491c4a94c6f2c187335a" exitCode=0
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.152989 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6hr78" event={"ID":"a4e9bf47-202b-4206-8758-a446e86d7a6b","Type":"ContainerDied","Data":"1aca7db6ed73901399e41ac14e6168e32d30199e2fda491c4a94c6f2c187335a"}
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.153007 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6hr78" event={"ID":"a4e9bf47-202b-4206-8758-a446e86d7a6b","Type":"ContainerStarted","Data":"c56ece452fbd1e723b2da9284cd98583daa240a08f2aa1618f410c69e0ecd4de"}
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.156849 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-csv4c" event={"ID":"588eb6e9-6d28-438d-b881-ab944960aa79","Type":"ContainerStarted","Data":"e238109c29479c756207d6c887421de169c9e99617af03cdec10b810f71e0d92"}
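
The "No retries permitted until ... (durationBeforeRetry 500ms)" lines above show the per-operation backoff that nestedpendingoperations.go enforces: each volume operation key carries an expiry, new attempts before the expiry are rejected, and each failure pushes the next permitted time out. The Go sketch below illustrates that pattern with invented names (backoff, tryStart, fail); the real code also grows the duration exponentially up to a cap, which is simplified away here.

package main

import (
	"fmt"
	"time"
)

type backoff struct {
	duration time.Duration // current wait; 500ms in the log above
	until    time.Time     // no retries permitted until this instant
}

// tryStart rejects an attempt that falls inside the backoff window.
func (b *backoff) tryStart(now time.Time) error {
	if now.Before(b.until) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			b.until.Format(time.RFC3339Nano), b.duration)
	}
	return nil
}

// fail records a failure and pushes the window forward.
func (b *backoff) fail(now time.Time) {
	b.until = now.Add(b.duration)
	// the real implementation doubles duration up to a cap; kept constant here
}

func main() {
	b := &backoff{duration: 500 * time.Millisecond}
	now := time.Now()
	b.fail(now)
	fmt.Println(b.tryStart(now.Add(100 * time.Millisecond))) // rejected: inside window
	fmt.Println(b.tryStart(now.Add(600 * time.Millisecond))) // <nil>: window elapsed
}
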
pod="openshift-console-operator/console-operator-58897d9998-csv4c" event={"ID":"588eb6e9-6d28-438d-b881-ab944960aa79","Type":"ContainerStarted","Data":"e238109c29479c756207d6c887421de169c9e99617af03cdec10b810f71e0d92"} Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.156987 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-csv4c" event={"ID":"588eb6e9-6d28-438d-b881-ab944960aa79","Type":"ContainerStarted","Data":"b0359745eb15ebfa9dd73266980a5af94e80909fe5b9dd805f9fbabaed0a05a0"} Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.157599 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-csv4c" Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.160847 4728 patch_prober.go:28] interesting pod/console-operator-58897d9998-csv4c container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.161015 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-csv4c" podUID="588eb6e9-6d28-438d-b881-ab944960aa79" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.169285 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" event={"ID":"7e8cbf19-0e6b-43ce-996c-11b1776e6eae","Type":"ContainerStarted","Data":"f0868be1d5ad11df682494454cb97e3b88c8b7488b93ed73716da02980bf4902"} Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.174709 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" event={"ID":"033a175a-69ae-431f-8803-b2f5db11ee91","Type":"ContainerStarted","Data":"5096eeadc8168ee637e490581ee2348d7ae39f7813e3c4f9e08c7375b122f105"} Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.174795 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" event={"ID":"033a175a-69ae-431f-8803-b2f5db11ee91","Type":"ContainerStarted","Data":"836a1293b353ad14a0cccda1b0a34ea2aaefe2bc2da6e728cd8c4b37275ed85f"} Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.185838 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj" event={"ID":"33b2bdb9-5bcb-4977-8722-3d2fa6f8e291","Type":"ContainerStarted","Data":"593a4c76aaf61d2f03b69efd86ccb558f025733ecfd9465addef7b257c687a75"} Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.194508 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" event={"ID":"031aab71-b01b-4173-a760-dc26e36374ae","Type":"ContainerStarted","Data":"fa8269af594ddc1be5340badb3132a1d778cf01ece4782471ded20ff5318659d"} Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.201312 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zm7m8" 
event={"ID":"335a17f2-115c-479a-9dfb-01f13b079108","Type":"ContainerStarted","Data":"a1a482cbce82af08cff2c04f1aefd790ff2d67c4f354afff608f2d050ed0a026"} Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.202902 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zrwsp" event={"ID":"a7a6f641-6419-4f6f-be79-e558596dc1c1","Type":"ContainerStarted","Data":"2ee98ea9dccd8361ac21842d34e8bc1ddacd0f48c78c89e0f20113a652ed4813"} Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.212685 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.212882 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.712846058 +0000 UTC m=+153.855550443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.213051 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.213504 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.713479554 +0000 UTC m=+153.856184019 (durationBeforeRetry 500ms). 
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.229254 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-6dpmf" podStartSLOduration=128.229218502 podStartE2EDuration="2m8.229218502s" podCreationTimestamp="2026-02-04 11:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:04.228098662 +0000 UTC m=+153.370803077" watchObservedRunningTime="2026-02-04 11:30:04.229218502 +0000 UTC m=+153.371922887"
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.315333 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.315493 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.815460015 +0000 UTC m=+153.958164400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.315908 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.316787 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.81677234 +0000 UTC m=+153.959476725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.420513 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.420787 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.920737973 +0000 UTC m=+154.063442358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.421368 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.421702 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:04.921689549 +0000 UTC m=+154.064393934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
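
The "Observed pod startup duration" lines are plain arithmetic: podStartE2EDuration is the observed running time minus podCreationTimestamp (the pods were created at 11:27:56-57 and only came up after the kubelet restart, hence the ~2m8s figures; the zero 0001-01-01 pulling timestamps mean no image pull was recorded). A small Go sketch of that computation, using the values from the authentication-operator line above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log entry; layouts match their formats.
	created, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-04 11:27:56 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2026-02-04 11:30:04.229218502 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 2m8.229218502s, matching podStartE2EDuration in the log.
	fmt.Println(running.Sub(created))
}
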
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.506634 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd75l" podStartSLOduration=128.506617468 podStartE2EDuration="2m8.506617468s" podCreationTimestamp="2026-02-04 11:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:04.506236087 +0000 UTC m=+153.648940482" watchObservedRunningTime="2026-02-04 11:30:04.506617468 +0000 UTC m=+153.649321853"
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.510662 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4"]
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.527306 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.527605 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.027588413 +0000 UTC m=+154.170292798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.539398 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf"]
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.542134 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww"]
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.579466 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cmjx5"]
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.586230 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n"]
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.587848 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r576m"]
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.628458 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.628772 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.128746742 +0000 UTC m=+154.271451127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.729041 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.729207 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.229180502 +0000 UTC m=+154.371884877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
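
The burst of "SyncLoop UPDATE" source="api" lines reflects the kubelet's main sync loop consuming pod changes from the API server and dispatching them per pod. The Go sketch below shows that channel-dispatch shape with invented types (podUpdate, syncLoop); it is only a pattern illustration, not the kubelet's actual types.

package main

import "fmt"

type podUpdate struct {
	Op     string   // "UPDATE", "ADD", ...
	Source string   // origin of the change, e.g. "api"
	Pods   []string // namespace/name keys
}

// syncLoop drains updates and would hand each pod to a sync worker.
func syncLoop(updates <-chan podUpdate) {
	for u := range updates {
		fmt.Printf("SyncLoop %s source=%q pods=%v\n", u.Op, u.Source, u.Pods)
		// a real kubelet now reconciles each pod's actual state against spec
	}
}

func main() {
	ch := make(chan podUpdate, 1)
	ch <- podUpdate{Op: "UPDATE", Source: "api",
		Pods: []string{"openshift-etcd-operator/etcd-operator-b45778765-4k2zf"}}
	close(ch)
	syncLoop(ch)
}
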
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.729329 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.729621 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.229610063 +0000 UTC m=+154.372314448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.817001 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4k2zf"]
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.821183 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8"]
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.835269 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.835574 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.3355588 +0000 UTC m=+154.478263185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.872465 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c"]
Feb 04 11:30:04 crc kubenswrapper[4728]: I0204 11:30:04.937978 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:04 crc kubenswrapper[4728]: E0204 11:30:04.938413 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.438393812 +0000 UTC m=+154.581098197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.040562 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:05 crc kubenswrapper[4728]: E0204 11:30:05.043088 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.543068055 +0000 UTC m=+154.685772440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
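
The paired "operationExecutor.UnmountVolume started" / "operationExecutor.MountVolume started" lines recur roughly every 100ms because the volume reconciler compares desired mounts against actual mounts on every pass and re-issues any operation that is still pending; until the CSI driver registers, every attempt fails and the same volume reappears on the next pass. A minimal Go sketch of that desired-vs-actual loop, with invented types and a stubbed failing attach:

package main

import (
	"fmt"
	"time"
)

// reconcile re-issues an operation for every desired volume not yet actual.
func reconcile(desired, actual map[string]bool, attach func(string) error) {
	for vol := range desired {
		if !actual[vol] {
			if err := attach(vol); err != nil {
				fmt.Printf("MountVolume started for %q: %v\n", vol, err)
			} else {
				actual[vol] = true
			}
		}
	}
}

func main() {
	desired := map[string]bool{"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8": true}
	actual := map[string]bool{}
	fail := func(string) error { return fmt.Errorf("driver not yet registered") }
	for i := 0; i < 3; i++ { // three reconciler passes, ~100ms apart as in the log
		reconcile(desired, actual, fail)
		time.Sleep(100 * time.Millisecond)
	}
}
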
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.064346 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pp8c9" podStartSLOduration=128.064323118 podStartE2EDuration="2m8.064323118s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.01683804 +0000 UTC m=+154.159542435" watchObservedRunningTime="2026-02-04 11:30:05.064323118 +0000 UTC m=+154.207027503"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.066365 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-njv76"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.075687 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s6rlf"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.095943 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bnrcg"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.118683 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.154652 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:05 crc kubenswrapper[4728]: E0204 11:30:05.155012 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.654999389 +0000 UTC m=+154.797703774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.253152 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.262435 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2v2s5"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.262852 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:05 crc kubenswrapper[4728]: E0204 11:30:05.263101 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.763067781 +0000 UTC m=+154.905772166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.263168 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:05 crc kubenswrapper[4728]: E0204 11:30:05.263618 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.763602755 +0000 UTC m=+154.906307140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
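
The mount failures persist until the hostpath driver's node plugin (the hostpath-provisioner/csi-hostpathplugin-x6j2r pod scheduled a few lines below) registers with the kubelet. Registration happens when the driver exposes a UNIX socket under the kubelet's plugin registry directory, which the kubelet then handshakes with over gRPC. Below is a deliberately simplified, stdlib-only Go sketch of just the socket-discovery step; the directory path is the conventional location and the polling is an assumption standing in for the real watch-plus-gRPC protocol.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	dir := "/var/lib/kubelet/plugins_registry" // conventional registry location
	for i := 0; i < 3; i++ {
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println("registry not readable:", err)
		} else {
			for _, e := range entries {
				if filepath.Ext(e.Name()) == ".sock" {
					// a real kubelet would now do the GetInfo/registration handshake
					fmt.Println("candidate driver socket:", e.Name())
				}
			}
		}
		time.Sleep(time.Second)
	}
}
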
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.291412 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.302252 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qrbcq" podStartSLOduration=128.302234788 podStartE2EDuration="2m8.302234788s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.301126039 +0000 UTC m=+154.443830424" watchObservedRunningTime="2026-02-04 11:30:05.302234788 +0000 UTC m=+154.444939173"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.307710 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5695c"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.350223 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" podStartSLOduration=129.350204988 podStartE2EDuration="2m9.350204988s" podCreationTimestamp="2026-02-04 11:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.330287721 +0000 UTC m=+154.472992106" watchObservedRunningTime="2026-02-04 11:30:05.350204988 +0000 UTC m=+154.492909373"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.354358 4728 csr.go:261] certificate signing request csr-6b72d is approved, waiting to be issued
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.363994 4728 csr.go:257] certificate signing request csr-6b72d is issued
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.365325 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:05 crc kubenswrapper[4728]: E0204 11:30:05.365634 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.865619727 +0000 UTC m=+155.008324112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.380807 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.389734 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj" event={"ID":"33b2bdb9-5bcb-4977-8722-3d2fa6f8e291","Type":"ContainerStarted","Data":"bbe8c2392b849f0ed810978d9dedb23fc287ce858c4439a34b8016fa3544a9fb"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.392693 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.401102 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.402259 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.419981 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-njv76" event={"ID":"69941123-11bc-4cf2-8a71-665627a08a99","Type":"ContainerStarted","Data":"a3163e61896eaa7b640cdb58731a4ee1ec31dd62534a490cddfe94816e332f72"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.425687 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w5kl7" podStartSLOduration=128.425672807 podStartE2EDuration="2m8.425672807s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.4106842 +0000 UTC m=+154.553388595" watchObservedRunningTime="2026-02-04 11:30:05.425672807 +0000 UTC m=+154.568377192"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.425701 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qdvrf"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.432370 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.433060 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x6j2r"]
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.436523 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-csv4c" podStartSLOduration=128.436506174 podStartE2EDuration="2m8.436506174s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.435089327 +0000 UTC m=+154.577793712" watchObservedRunningTime="2026-02-04 11:30:05.436506174 +0000 UTC m=+154.579210559"
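
The two csr.go lines show the node certificate flow in two steps: the CertificateSigningRequest csr-6b72d is first approved, then the signer attaches the issued certificate, and the client polls until the certificate bytes appear. The Go sketch below mimics that poll-until-issued shape with invented types; the real kubelet drives this through the certificates.k8s.io API via client-go, which is abstracted away here.

package main

import (
	"errors"
	"fmt"
	"time"
)

type csr struct {
	name        string
	approved    bool
	certificate []byte
}

// waitIssued polls until the approved CSR also carries a certificate.
func waitIssued(get func() csr, timeout time.Duration) ([]byte, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		c := get()
		if c.approved && len(c.certificate) > 0 {
			fmt.Printf("certificate signing request %s is issued\n", c.name)
			return c.certificate, nil
		}
		if c.approved {
			fmt.Printf("certificate signing request %s is approved, waiting to be issued\n", c.name)
		}
		time.Sleep(100 * time.Millisecond)
	}
	return nil, errors.New("timed out waiting for certificate")
}

func main() {
	attempts := 0
	get := func() csr {
		attempts++
		c := csr{name: "csr-6b72d", approved: true}
		if attempts > 2 { // pretend the signer issues on the third poll
			c.certificate = []byte("-----BEGIN CERTIFICATE-----...")
		}
		return c
	}
	if _, err := waitIssued(get, time.Second); err != nil {
		fmt.Println(err)
	}
}
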
Feb 04 11:30:05 crc kubenswrapper[4728]: W0204 11:30:05.440172 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51ce2604_c544_4207_8f53_daea97729643.slice/crio-21b30b491c6cb92264651986eae2801ababc1da19a470202ca6e97ed053755ab WatchSource:0}: Error finding container 21b30b491c6cb92264651986eae2801ababc1da19a470202ca6e97ed053755ab: Status 404 returned error can't find the container with id 21b30b491c6cb92264651986eae2801ababc1da19a470202ca6e97ed053755ab
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.440999 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" event={"ID":"7b16e96d-9a89-4901-af25-b15ac64ffe90","Type":"ContainerStarted","Data":"a52697dd6771f08f5846b42f32b51264488eaa1cfb1e398721971ae6dbc8e523"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.449012 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.449076 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.464901 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" event={"ID":"15f8fdf4-3a93-4957-ad97-a1a376d821cd","Type":"ContainerStarted","Data":"6c26de13a8ecee907c50d456b9b05e24b59f1cea21560659c460185391e4b0a4"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.466211 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.467641 4728 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tlnkw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.467669 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" podUID="15f8fdf4-3a93-4957-ad97-a1a376d821cd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.469703 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l4qn4" event={"ID":"20be2be6-dc74-4404-b883-1ad4af94512b","Type":"ContainerStarted","Data":"295ec0f8d1c68df9fc578e4ef946eea991d84c3145e000f80ee2a043a9dec79a"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.469738 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l4qn4" event={"ID":"20be2be6-dc74-4404-b883-1ad4af94512b","Type":"ContainerStarted","Data":"86baabc85290a683f556cb1b5719ba1db2b59bf3f1f6afa6ec1bbee192ba95ed"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.470302 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l4qn4"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.474953 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:05 crc kubenswrapper[4728]: E0204 11:30:05.476004 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:05.97599074 +0000 UTC m=+155.118695125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.481401 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4f6aa38-43e0-4d04-a3a9-12b046d30937" containerID="fbb15f0424e29bfabbbe17a1a84c7e429f4185b818047cb8e3b55b2f0aad32fe" exitCode=0
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.481491 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" event={"ID":"f4f6aa38-43e0-4d04-a3a9-12b046d30937","Type":"ContainerDied","Data":"fbb15f0424e29bfabbbe17a1a84c7e429f4185b818047cb8e3b55b2f0aad32fe"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.494168 4728 patch_prober.go:28] interesting pod/downloads-7954f5f757-l4qn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.494242 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l4qn4" podUID="20be2be6-dc74-4404-b883-1ad4af94512b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 04 11:30:05 crc kubenswrapper[4728]: W0204 11:30:05.507684 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dda38b8_011e_4cde_a88e_abdab857fe2f.slice/crio-421decb20fa03cfe66aa7a679026ca7b3548e180d447c5c77a6260a92f8ce004 WatchSource:0}: Error finding container 421decb20fa03cfe66aa7a679026ca7b3548e180d447c5c77a6260a92f8ce004: Status 404 returned error can't find the container with id 421decb20fa03cfe66aa7a679026ca7b3548e180d447c5c77a6260a92f8ce004
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.538352 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zrwsp" event={"ID":"a7a6f641-6419-4f6f-be79-e558596dc1c1","Type":"ContainerStarted","Data":"e54ca16815851c909b3903f0a9a48f7710f4e51b09118eba126fd380619f009f"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.546162 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4" event={"ID":"07b49a63-3679-433b-8b24-d2322125ccc9","Type":"ContainerStarted","Data":"528977b93169f57d969e1af6125be539761a992d963d81fbd71c49d1c69b935d"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.546218 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4" event={"ID":"07b49a63-3679-433b-8b24-d2322125ccc9","Type":"ContainerStarted","Data":"4ab9505aa44cb94e7ba9f83bf5de5c0a6c2d1b6460f5cd15d6ef4d387a7b80cb"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.551213 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" podStartSLOduration=128.551189372 podStartE2EDuration="2m8.551189372s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.513469163 +0000 UTC m=+154.656173548" watchObservedRunningTime="2026-02-04 11:30:05.551189372 +0000 UTC m=+154.693893757"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.553388 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-l4qn4" podStartSLOduration=128.553375069 podStartE2EDuration="2m8.553375069s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.545806899 +0000 UTC m=+154.688511294" watchObservedRunningTime="2026-02-04 11:30:05.553375069 +0000 UTC m=+154.696079464"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.579142 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:05 crc kubenswrapper[4728]: E0204 11:30:05.580326 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:06.080299932 +0000 UTC m=+155.223004477 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.602305 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zm7m8" podStartSLOduration=128.602285134 podStartE2EDuration="2m8.602285134s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.601393851 +0000 UTC m=+154.744098246" watchObservedRunningTime="2026-02-04 11:30:05.602285134 +0000 UTC m=+154.744989519"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.626343 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-zrwsp" podStartSLOduration=5.626320981 podStartE2EDuration="5.626320981s" podCreationTimestamp="2026-02-04 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.624562434 +0000 UTC m=+154.767266839" watchObservedRunningTime="2026-02-04 11:30:05.626320981 +0000 UTC m=+154.769025386"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.638805 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" event={"ID":"87039d42-443e-40f7-abe1-a6462556cc32","Type":"ContainerStarted","Data":"01f6e75f4885f5bbeb3814fb705eedc3b4decaf7071f25ff3b6cfe8a88a9567f"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.638983 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zm7m8" event={"ID":"335a17f2-115c-479a-9dfb-01f13b079108","Type":"ContainerStarted","Data":"ae37b5471a002b980d4e71e9f54ebdaef7d239ae4bf4f82f83d26c91d28c4253"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.639227 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6hr78" event={"ID":"a4e9bf47-202b-4206-8758-a446e86d7a6b","Type":"ContainerStarted","Data":"9a0582a826524b7b02be0a1596a69d5e08f45b734ee54410329ec90e58439863"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.639318 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" event={"ID":"d800f63b-2465-4553-aa78-99fff8f484bb","Type":"ContainerStarted","Data":"29ad144e4844acfa2275f690031f8d17538f9aeec53b09aa835141f9063f7008"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.639400 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" event={"ID":"083fbf33-a605-4fbc-8bef-6ad1b73a8059","Type":"ContainerStarted","Data":"507504525e4931533c65943c9d9a697a85d58fbe7bd23f501e125d7cfe392e27"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.639479 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" event={"ID":"a16ecbae-a304-444d-b36c-c3e82a1332a1","Type":"ContainerStarted","Data":"6838c8809066ffe16b3db1e0201b9d9893f6ac8d270846e742ed8f8f7680b981"}
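
Single probe failures like the liveness and readiness ones above do not immediately trigger action; the kubelet acts only after consecutive failures reach the probe's failure threshold. The Go sketch below shows that counting logic with invented types; the stated default threshold of 3 is the standard Kubernetes default, and the restart decision is simplified to a print.

package main

import "fmt"

type probeTracker struct {
	consecutiveFails int
	failureThreshold int // Kubernetes default is 3
}

// observe records one probe result and reports whether the threshold tripped.
func (t *probeTracker) observe(success bool) bool {
	if success {
		t.consecutiveFails = 0
		return false
	}
	t.consecutiveFails++
	return t.consecutiveFails >= t.failureThreshold
}

func main() {
	t := &probeTracker{failureThreshold: 3}
	for i, ok := range []bool{false, false, false} {
		if t.observe(ok) {
			fmt.Printf("probe %d: threshold reached, restarting container\n", i+1)
		} else {
			fmt.Printf("probe %d: failure recorded (%d/%d)\n", i+1, t.consecutiveFails, t.failureThreshold)
		}
	}
}
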
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.647029 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" event={"ID":"3269cf72-ed95-40a4-84d6-74e53ea1c850","Type":"ContainerStarted","Data":"dec2e711b7976fcb2243c8e3fb28c0271d6611faac0686a9ef3ee4e9c4694a96"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.647081 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" event={"ID":"3269cf72-ed95-40a4-84d6-74e53ea1c850","Type":"ContainerStarted","Data":"e5dffc4184ff6f169d14fe6af1b8fa9b48f1fdea6bce7de7356a130f1c2a1964"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.650321 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" event={"ID":"bd76fb4c-1a31-412e-ae24-5c798e86178e","Type":"ContainerStarted","Data":"f2311fb9c59608d3e9e2ec30703404bf14a8f4b2cd0b5de335de808e23c498c3"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.651680 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" event={"ID":"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac","Type":"ContainerStarted","Data":"4d7375c4c4e7c5c11c17d678eba64cdf9665081ee270db5c5093c04dd6e06933"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.661043 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" event={"ID":"7e8cbf19-0e6b-43ce-996c-11b1776e6eae","Type":"ContainerStarted","Data":"3b32a31e12313106370347c8865020ab489ed62226494b9fc80e534f77f30a1d"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.664840 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8" event={"ID":"cb2c7326-dd1e-481c-ad3f-c8f884d636b1","Type":"ContainerStarted","Data":"71c57c8594a7b604f267ce3f49570c5bccbd281d53a1ab7506f14cea43f6087d"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.671450 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfnww" podStartSLOduration=128.671430466 podStartE2EDuration="2m8.671430466s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.670388088 +0000 UTC m=+154.813092473" watchObservedRunningTime="2026-02-04 11:30:05.671430466 +0000 UTC m=+154.814134861"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.682790 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:05 crc kubenswrapper[4728]: E0204 11:30:05.684114 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:06.184098212 +0000 UTC m=+155.326802597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.687065 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" event={"ID":"bdb06213-4bce-43d5-b16f-0bc09dc118fe","Type":"ContainerStarted","Data":"c6d26280ed7ab5931cbd67528bf6b0c7ddc3341dd56954401e1e0f10e947d967"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.687108 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" event={"ID":"bdb06213-4bce-43d5-b16f-0bc09dc118fe","Type":"ContainerStarted","Data":"12e1f50fa68087570d08450aaca0398799e0f5f063bb4b644ee33940017cf2c7"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.688457 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.703691 4728 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ww7sf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.703777 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" podUID="bdb06213-4bce-43d5-b16f-0bc09dc118fe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.705984 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8" podStartSLOduration=128.70596011 podStartE2EDuration="2m8.70596011s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.698263427 +0000 UTC m=+154.840967812" watchObservedRunningTime="2026-02-04 11:30:05.70596011 +0000 UTC m=+154.848664495"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.714362 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" event={"ID":"1f802986-f97c-4813-9aec-d48d43eeedae","Type":"ContainerStarted","Data":"03bbccb435d81e34eb53d820795cb49e3c3f44936a018676be0b3759ebfd4e7a"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.714440 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" event={"ID":"1f802986-f97c-4813-9aec-d48d43eeedae","Type":"ContainerStarted","Data":"cabe3d18b406b9d461bbec3cd72cbd198bb9490fd0d615086be40ab080593797"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.716848 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zm7m8"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.742144 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" event={"ID":"ccba369f-e378-4f2d-b733-f658edbd6c99","Type":"ContainerStarted","Data":"a899128823d06698bd7beddb1142b2775b2a581b8dcbbae5ff7c6a89ea1536f5"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.747650 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 04 11:30:05 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld
Feb 04 11:30:05 crc kubenswrapper[4728]: [+]process-running ok
Feb 04 11:30:05 crc kubenswrapper[4728]: healthz check failed
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.747721 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.754970 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4ckr" event={"ID":"86a5137c-eb55-438a-8e8d-99f2a2d4bf48","Type":"ContainerStarted","Data":"eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.762059 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dbmq2" event={"ID":"031aab71-b01b-4173-a760-dc26e36374ae","Type":"ContainerStarted","Data":"51f6cffc9b427fea628b3bd4f5e54395b777edfef2524774c24f909e9b5b0517"}
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.771812 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rg87q" podStartSLOduration=128.771789013 podStartE2EDuration="2m8.771789013s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.732585885 +0000 UTC m=+154.875290270" watchObservedRunningTime="2026-02-04 11:30:05.771789013 +0000 UTC m=+154.914493418"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.777816 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-csv4c"
Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.785589 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:05 crc kubenswrapper[4728]: E0204 11:30:05.788000 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:06.287937941 +0000 UTC m=+155.430642326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
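The router's startup probe output above shows the aggregated healthz format these components use: each named sub-check reports [+]name ok or [-]name failed, any failure turns the response into HTTP 500 with a trailing "healthz check failed" line, and prober.go then records the 500 as a probe failure. A self-contained sketch of a handler that produces that shape; the sub-check names are taken from the probe output, but the handler itself is illustrative and not the router's actual code:

package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

// healthz aggregates named sub-checks into the [+]/[-] report format
// visible in the probe output, returning 500 if any sub-check fails.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			http.Error(w, body+"healthz check failed", http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, body+"ok")
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthz(checks))
	// A startup probe GETting /healthz sees 500 until all sub-checks pass.
	http.ListenAndServe(":8080", nil)
}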
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:06.287937941 +0000 UTC m=+155.430642326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.816117 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" podStartSLOduration=128.816098957 podStartE2EDuration="2m8.816098957s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.774068604 +0000 UTC m=+154.916772989" watchObservedRunningTime="2026-02-04 11:30:05.816098957 +0000 UTC m=+154.958803342" Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.816767 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8jfp4" podStartSLOduration=128.816745374 podStartE2EDuration="2m8.816745374s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.815768578 +0000 UTC m=+154.958472983" watchObservedRunningTime="2026-02-04 11:30:05.816745374 +0000 UTC m=+154.959449759" Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.854838 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-c4ckr" podStartSLOduration=128.854824323 podStartE2EDuration="2m8.854824323s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:05.853428246 +0000 UTC m=+154.996132631" watchObservedRunningTime="2026-02-04 11:30:05.854824323 +0000 UTC m=+154.997528698" Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.887184 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:05 crc kubenswrapper[4728]: E0204 11:30:05.890591 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:06.390574849 +0000 UTC m=+155.533279224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:05 crc kubenswrapper[4728]: I0204 11:30:05.990546 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:05 crc kubenswrapper[4728]: E0204 11:30:05.991156 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:06.491140373 +0000 UTC m=+155.633844758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.092809 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:06 crc kubenswrapper[4728]: E0204 11:30:06.093183 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:06.593171655 +0000 UTC m=+155.735876040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.194135 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:06 crc kubenswrapper[4728]: E0204 11:30:06.194494 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:06.694477537 +0000 UTC m=+155.837181922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.296205 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:06 crc kubenswrapper[4728]: E0204 11:30:06.296854 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:06.796839488 +0000 UTC m=+155.939543873 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.366032 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-04 11:25:05 +0000 UTC, rotation deadline is 2026-12-07 04:53:57.547469951 +0000 UTC Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.366068 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7337h23m51.181404729s for next certificate rotation Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.399281 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:06 crc kubenswrapper[4728]: E0204 11:30:06.405446 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:06.905418204 +0000 UTC m=+156.048122729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.500877 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:06 crc kubenswrapper[4728]: E0204 11:30:06.501363 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.001348994 +0000 UTC m=+156.144053379 (durationBeforeRetry 500ms). 
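The certificate_manager lines above are self-consistent too: the kubelet-serving certificate expires 2027-02-04 11:25:05, the chosen rotation deadline of 2026-12-07 04:53:57 sits at roughly 84% of the certificate's lifetime, and the manager then sleeps for the logged 7337h23m51s (about 305 days). A sketch of the jittered-deadline idea, assuming the certificate was issued near the log's own date and that the deadline is drawn uniformly from the 70-90% band of the validity period (both are assumptions; the log only states the expiry and the chosen deadline):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the 70%-90% band of the
// certificate lifetime (assumed bounds, consistent with the ~84% above).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// notBefore is assumed (one year before the logged expiration).
	notBefore, _ := time.Parse(time.RFC3339, "2026-02-04T11:25:05Z")
	notAfter, _ := time.Parse(time.RFC3339, "2027-02-04T11:25:05Z")

	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	fmt.Println("sleep:", deadline.Sub(notBefore)) // on the order of 250-330 days
}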
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.602343 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:06 crc kubenswrapper[4728]: E0204 11:30:06.602522 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.102505714 +0000 UTC m=+156.245210099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.602601 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:06 crc kubenswrapper[4728]: E0204 11:30:06.602902 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.102895144 +0000 UTC m=+156.245599529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.697090 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 04 11:30:06 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld
Feb 04 11:30:06 crc kubenswrapper[4728]: [+]process-running ok
Feb 04 11:30:06 crc kubenswrapper[4728]: healthz check failed
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.697423 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.704371 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:06 crc kubenswrapper[4728]: E0204 11:30:06.704543 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.204511934 +0000 UTC m=+156.347216319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.704641 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:06 crc kubenswrapper[4728]: E0204 11:30:06.704956 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.204943107 +0000 UTC m=+156.347647492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.801048 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" event={"ID":"083fbf33-a605-4fbc-8bef-6ad1b73a8059","Type":"ContainerStarted","Data":"a8589304ef651b90fde9fda4d46b2b9698868a84b5e4d1e74ce728288e8b7627"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.805817 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:06 crc kubenswrapper[4728]: E0204 11:30:06.806415 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.306376242 +0000 UTC m=+156.449080627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.818263 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" event={"ID":"bd76fb4c-1a31-412e-ae24-5c798e86178e","Type":"ContainerStarted","Data":"c8801d07c19407d7ed81efc33e0ccf86c5cc790752ba8071ac1c99060991a307"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.843259 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" event={"ID":"87039d42-443e-40f7-abe1-a6462556cc32","Type":"ContainerStarted","Data":"2182e88d72c5a8b19bed54b7941a9aed00a02b7b54a3838cdd292394be7499ea"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.844625 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5"
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.852907 4728 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-cmjx5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body=
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.852987 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" podUID="87039d42-443e-40f7-abe1-a6462556cc32" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused"
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.853848 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" event={"ID":"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa","Type":"ContainerStarted","Data":"b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.853885 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" event={"ID":"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa","Type":"ContainerStarted","Data":"ba41ea138a121c8f684829ed789c0688e5bfb6ca42b01bbbc183a35b92ee49c4"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.855440 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5"
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.855590 4728 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2v2s5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body=
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.855640 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" podUID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused"
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.893458 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bnrcg" podStartSLOduration=129.893434678 podStartE2EDuration="2m9.893434678s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:06.847726488 +0000 UTC m=+155.990430873" watchObservedRunningTime="2026-02-04 11:30:06.893434678 +0000 UTC m=+156.036139073"
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.893909 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" event={"ID":"f4f6aa38-43e0-4d04-a3a9-12b046d30937","Type":"ContainerStarted","Data":"f82d1775737bc61fd86fb32d5c5b91178ce90867c7b07e2103efd4238d2c07b0"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.894662 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q"
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.910864 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:06 crc kubenswrapper[4728]: E0204 11:30:06.911991 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.41197687 +0000 UTC m=+156.554681255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.925119 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" podStartSLOduration=129.925099917 podStartE2EDuration="2m9.925099917s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:06.924256514 +0000 UTC m=+156.066960899" watchObservedRunningTime="2026-02-04 11:30:06.925099917 +0000 UTC m=+156.067804302"
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.926674 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" podStartSLOduration=130.926661588 podStartE2EDuration="2m10.926661588s" podCreationTimestamp="2026-02-04 11:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:06.899613192 +0000 UTC m=+156.042317587" watchObservedRunningTime="2026-02-04 11:30:06.926661588 +0000 UTC m=+156.069365983"
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.927773 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj" event={"ID":"33b2bdb9-5bcb-4977-8722-3d2fa6f8e291","Type":"ContainerStarted","Data":"4d0675f96da585074c55edecea08d39c3dc7bb3986a31443dcfe6448d2c12d9f"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.932221 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" event={"ID":"052783b7-76c5-4c69-bc04-72230f147ee4","Type":"ContainerStarted","Data":"9645367bc50eb89816eeade809245291f525bfd7b2430956d04657d6ef4441fb"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.932259 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" event={"ID":"052783b7-76c5-4c69-bc04-72230f147ee4","Type":"ContainerStarted","Data":"d54f55514b0f9ca21495c412c5758457aa2deae25f11f5a065811d187e4b58f6"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.945322 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q" podStartSLOduration=129.945299051 podStartE2EDuration="2m9.945299051s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:06.943528535 +0000 UTC m=+156.086232940" watchObservedRunningTime="2026-02-04 11:30:06.945299051 +0000 UTC m=+156.088003446"
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.967941 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-27mqj" podStartSLOduration=129.967908401 podStartE2EDuration="2m9.967908401s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:06.965353693 +0000 UTC m=+156.108058088" watchObservedRunningTime="2026-02-04 11:30:06.967908401 +0000 UTC m=+156.110612786"
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.969284 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" event={"ID":"d800f63b-2465-4553-aa78-99fff8f484bb","Type":"ContainerStarted","Data":"a390d0391a5f04f1b39a8b6170ce85f7bb4f9cc0e457acdaafb0ea159f620355"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.987603 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" event={"ID":"0e3d50ac-921a-4d44-b6ed-cfc7709f4863","Type":"ContainerStarted","Data":"58887a2574102600bd7db80057bf6e1f50eeab066c1c16486a80f5afd3b3a004"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.987679 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" event={"ID":"0e3d50ac-921a-4d44-b6ed-cfc7709f4863","Type":"ContainerStarted","Data":"1dfef1735db447b0e392e7b723ff5058fdd28ddab8f26534f5c3ea63148942d9"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.987691 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" event={"ID":"0e3d50ac-921a-4d44-b6ed-cfc7709f4863","Type":"ContainerStarted","Data":"dc17aaaf38356b31037fbe281560d69022cbd4015b45a055c633f57a8e926758"}
Feb 04 11:30:06 crc kubenswrapper[4728]: I0204 11:30:06.992182 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" podStartSLOduration=6.992153683 podStartE2EDuration="6.992153683s" podCreationTimestamp="2026-02-04 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:06.991575977 +0000 UTC m=+156.134280372" watchObservedRunningTime="2026-02-04 11:30:06.992153683 +0000 UTC m=+156.134858068"
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.003353 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5695c" event={"ID":"51ce2604-c544-4207-8f53-daea97729643","Type":"ContainerStarted","Data":"4db36522f0795752393cd87990201debdd4d36908354578018e1cf70f470ebc7"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.003434 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5695c" event={"ID":"51ce2604-c544-4207-8f53-daea97729643","Type":"ContainerStarted","Data":"21b30b491c6cb92264651986eae2801ababc1da19a470202ca6e97ed053755ab"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.017437 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:07 crc kubenswrapper[4728]: E0204 11:30:07.019119 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.519062355 +0000 UTC m=+156.661766740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.027590 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bvvbh" podStartSLOduration=130.02754051 podStartE2EDuration="2m10.02754051s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.021960582 +0000 UTC m=+156.164664967" watchObservedRunningTime="2026-02-04 11:30:07.02754051 +0000 UTC m=+156.170244895"
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.041309 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" event={"ID":"7b16e96d-9a89-4901-af25-b15ac64ffe90","Type":"ContainerStarted","Data":"e0dcdcce5d775afce75a7b2c55edc4487e838d89ac59a3e050ddfb273c022875"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.041359 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" event={"ID":"7b16e96d-9a89-4901-af25-b15ac64ffe90","Type":"ContainerStarted","Data":"b65982d21ba9bba7ac9384b0f4a998b755aef240745d47ad25d8484bd567b06d"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.042462 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6"
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.075548 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4" event={"ID":"07b49a63-3679-433b-8b24-d2322125ccc9","Type":"ContainerStarted","Data":"bb7b0ac5c3d851228696d133efa057377e308e1d29921529b138344954c4dffe"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.077462 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cvhf8" event={"ID":"cb2c7326-dd1e-481c-ad3f-c8f884d636b1","Type":"ContainerStarted","Data":"8a5fbe90c3b29c7fb20b452a057c888bb8349dc3cc669dedeb70f6b3cbe2ca1c"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.101539 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" event={"ID":"0cef0962-392a-46f0-9bc4-14e6547d36c4","Type":"ContainerStarted","Data":"6173b841a4f5f6d55f76327816f02c6d5da7a5ca4949629ccbea14b703c5219f"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.122515 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:07 crc kubenswrapper[4728]: E0204 11:30:07.124012 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.623998874 +0000 UTC m=+156.766703259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.124647 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" event={"ID":"9e0e9f45-e348-47ad-8de3-b3a1d60eeeac","Type":"ContainerStarted","Data":"e0a70f729f1467d31214f331cbf1e5815dafd862236f3cff4f4ddf108d3ec523"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.130508 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" podStartSLOduration=130.130492067 podStartE2EDuration="2m10.130492067s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.129572662 +0000 UTC m=+156.272277047" watchObservedRunningTime="2026-02-04 11:30:07.130492067 +0000 UTC m=+156.273196442"
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.191003 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-njv76" event={"ID":"69941123-11bc-4cf2-8a71-665627a08a99","Type":"ContainerStarted","Data":"614203772f70d9047f210d248c0b2b684071cc7f24f88832c3635426a45f9702"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.224203 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:07 crc kubenswrapper[4728]: E0204 11:30:07.225396 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.725380589 +0000 UTC m=+156.868084974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.234477 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6hr78" event={"ID":"a4e9bf47-202b-4206-8758-a446e86d7a6b","Type":"ContainerStarted","Data":"fde9b25c8ae1d34dd3bd0a688aea517a85a41aa9e7077b6cdb570e59b66a8d6e"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.244098 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" event={"ID":"f58ac097-02b6-4c5c-a670-62f8fcdc5853","Type":"ContainerStarted","Data":"bac184273330b28620ebf42d2a6c76bba03d916dfa037a635e7a265d6a0f1e01"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.249051 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-4k2zf" podStartSLOduration=130.249031435 podStartE2EDuration="2m10.249031435s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.249019635 +0000 UTC m=+156.391724020" watchObservedRunningTime="2026-02-04 11:30:07.249031435 +0000 UTC m=+156.391735820"
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.267336 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" event={"ID":"d422575c-0503-4a8b-aa39-f8131db07fbd","Type":"ContainerStarted","Data":"8104759e9ead7f001fbffc7609a3472c579f547c40ce70c2d69278ec983618a9"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.267400 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" event={"ID":"d422575c-0503-4a8b-aa39-f8131db07fbd","Type":"ContainerStarted","Data":"4443f6bfbb3bdd3ac551eeebf32f1a05eef6a6b21e95bb8a8d71848d16db483c"}
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.268056 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4"
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.274876 4728 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zchf4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.274930 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" podUID="d422575c-0503-4a8b-aa39-f8131db07fbd" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.331530 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
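The csi-hostpathplugin-x6j2r ContainerStarted event above matters for everything else in this stretch: that pod provides the kubevirt.io.hostpath-provisioner driver whose absence has been failing every MountDevice and TearDownAt attempt, so the 500ms retries should begin to succeed once it registers. Conceptually the failure is a lookup in kubelet's in-memory map of registered CSI drivers, which is only populated when a driver announces itself over the plugin registration socket; the sketch below illustrates that idea (the registry type and socket path are illustrative, not kubelet's actual internals):

package main

import (
	"fmt"
	"sync"
)

// csiRegistry maps driver names to endpoints; lookups fail with the same
// "not found in the list of registered CSI drivers" error seen in the log
// until the driver registers.
type csiRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint socket path
}

func (r *csiRegistry) client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func (r *csiRegistry) register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

func main() {
	reg := &csiRegistry{drivers: map[string]string{}}
	if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("mount attempt:", err) // fails until the plugin pod registers
	}
	// Hypothetical socket path; the real one depends on the plugin's deployment.
	reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	ep, _ := reg.client("kubevirt.io.hostpath-provisioner")
	fmt.Println("registered at:", ep)
}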
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.332041 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" event={"ID":"ca8423fa-0c80-4400-9a3b-5bab042ae353","Type":"ContainerStarted","Data":"114bc7342b38aaebd9618debd5161f7abc944d131ed669ab02093472d9392e83"} Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.332079 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" event={"ID":"ca8423fa-0c80-4400-9a3b-5bab042ae353","Type":"ContainerStarted","Data":"91da04b4f291d8e173d349d9f8c8cbb82df0120bd2c8a0d0f6f1d8b2bb814cae"} Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.332951 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.333650 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d96m4" podStartSLOduration=130.333631746 podStartE2EDuration="2m10.333631746s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.306506547 +0000 UTC m=+156.449210932" watchObservedRunningTime="2026-02-04 11:30:07.333631746 +0000 UTC m=+156.476336131" Feb 04 11:30:07 crc kubenswrapper[4728]: E0204 11:30:07.333720 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.833708458 +0000 UTC m=+156.976412843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.333851 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" podStartSLOduration=130.333844792 podStartE2EDuration="2m10.333844792s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.332951478 +0000 UTC m=+156.475655873" watchObservedRunningTime="2026-02-04 11:30:07.333844792 +0000 UTC m=+156.476549177" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.352715 4728 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-k8snv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.352791 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" podUID="ca8423fa-0c80-4400-9a3b-5bab042ae353" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.363838 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" event={"ID":"a7e87130-d830-4391-a2b3-40d2b63149e2","Type":"ContainerStarted","Data":"6fa8808d35ed508f5198acdfe473604b36e5115958f2a701556590a73491b426"} Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.412854 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" event={"ID":"1f802986-f97c-4813-9aec-d48d43eeedae","Type":"ContainerStarted","Data":"dd41befde7518467188a3c8b95e8eddd3293835e8152767e36d78fcac54638c8"} Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.432714 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:07 crc kubenswrapper[4728]: E0204 11:30:07.433258 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:07.933242994 +0000 UTC m=+157.075947379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.440916 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" event={"ID":"9dda38b8-011e-4cde-a88e-abdab857fe2f","Type":"ContainerStarted","Data":"421decb20fa03cfe66aa7a679026ca7b3548e180d447c5c77a6260a92f8ce004"} Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.466828 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" podStartSLOduration=130.466811873 podStartE2EDuration="2m10.466811873s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.406925207 +0000 UTC m=+156.549629592" watchObservedRunningTime="2026-02-04 11:30:07.466811873 +0000 UTC m=+156.609516258" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.467322 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" podStartSLOduration=130.467318346 podStartE2EDuration="2m10.467318346s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.467197653 +0000 UTC m=+156.609902038" watchObservedRunningTime="2026-02-04 11:30:07.467318346 +0000 UTC m=+156.610022731" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.472146 4728 generic.go:334] "Generic (PLEG): container finished" podID="a16ecbae-a304-444d-b36c-c3e82a1332a1" containerID="39f79dca8fdad2b24d8a284b4adf9188ef775a1c75beedc4c58ccec5c781303e" exitCode=0 Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.472231 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" event={"ID":"a16ecbae-a304-444d-b36c-c3e82a1332a1","Type":"ContainerDied","Data":"39f79dca8fdad2b24d8a284b4adf9188ef775a1c75beedc4c58ccec5c781303e"} Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.486622 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" event={"ID":"43618563-306c-4371-a92b-4baf4aa7e352","Type":"ContainerStarted","Data":"23189a9e547f0e9fc05d8945845c89c3070fcda4b8bc22d530a14a8a1ae56fd9"} Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.486673 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" event={"ID":"43618563-306c-4371-a92b-4baf4aa7e352","Type":"ContainerStarted","Data":"82013da1ea183c5e97ad3dd096e0f908b1aaac59338c23473f6d405d66ce87a6"} Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.498686 4728 patch_prober.go:28] interesting pod/downloads-7954f5f757-l4qn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 
10.217.0.13:8080: connect: connection refused" start-of-body= Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.498766 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l4qn4" podUID="20be2be6-dc74-4404-b883-1ad4af94512b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.499380 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.512031 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-6hr78" podStartSLOduration=131.51201179 podStartE2EDuration="2m11.51201179s" podCreationTimestamp="2026-02-04 11:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.509136564 +0000 UTC m=+156.651840969" watchObservedRunningTime="2026-02-04 11:30:07.51201179 +0000 UTC m=+156.654716175" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.534556 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:07 crc kubenswrapper[4728]: E0204 11:30:07.536188 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:08.03617686 +0000 UTC m=+157.178881245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.538451 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.574008 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-njv76" podStartSLOduration=7.573987941 podStartE2EDuration="7.573987941s" podCreationTimestamp="2026-02-04 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.572413369 +0000 UTC m=+156.715117754" watchObservedRunningTime="2026-02-04 11:30:07.573987941 +0000 UTC m=+156.716692326" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.606494 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.606887 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.638623 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.640344 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" podStartSLOduration=130.640327168 podStartE2EDuration="2m10.640327168s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.638185741 +0000 UTC m=+156.780890266" watchObservedRunningTime="2026-02-04 11:30:07.640327168 +0000 UTC m=+156.783031553" Feb 04 11:30:07 crc kubenswrapper[4728]: E0204 11:30:07.640666 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:08.140648527 +0000 UTC m=+157.283352912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.696716 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" podStartSLOduration=130.696701191 podStartE2EDuration="2m10.696701191s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.696088385 +0000 UTC m=+156.838792760" watchObservedRunningTime="2026-02-04 11:30:07.696701191 +0000 UTC m=+156.839405576"
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.713298 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 04 11:30:07 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld
Feb 04 11:30:07 crc kubenswrapper[4728]: [+]process-running ok
Feb 04 11:30:07 crc kubenswrapper[4728]: healthz check failed
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.713671 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.743243 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-r576m" podStartSLOduration=130.743223053 podStartE2EDuration="2m10.743223053s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:07.732091968 +0000 UTC m=+156.874796363" watchObservedRunningTime="2026-02-04 11:30:07.743223053 +0000 UTC m=+156.885927438"
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.744404 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:07 crc kubenswrapper[4728]: E0204 11:30:07.744729 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:08.244718232 +0000 UTC m=+157.387422617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.847295 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:07 crc kubenswrapper[4728]: E0204 11:30:07.847604 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:08.347588607 +0000 UTC m=+157.490292992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:07 crc kubenswrapper[4728]: I0204 11:30:07.948579 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:07 crc kubenswrapper[4728]: E0204 11:30:07.949382 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:08.449363723 +0000 UTC m=+157.592068108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.050045 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:08 crc kubenswrapper[4728]: E0204 11:30:08.050409 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:08.550391328 +0000 UTC m=+157.693095713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.153052 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:08 crc kubenswrapper[4728]: E0204 11:30:08.153400 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:08.653384215 +0000 UTC m=+157.796088600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.254963 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:08 crc kubenswrapper[4728]: E0204 11:30:08.255349 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:08.755329195 +0000 UTC m=+157.898033610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.356894 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:08 crc kubenswrapper[4728]: E0204 11:30:08.357235 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:08.857223274 +0000 UTC m=+157.999927659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.457861 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:08 crc kubenswrapper[4728]: E0204 11:30:08.458604 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:08.958584268 +0000 UTC m=+158.101288653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.491140 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" event={"ID":"f58ac097-02b6-4c5c-a670-62f8fcdc5853","Type":"ContainerStarted","Data":"9ab7e3fabe69646635d4ac96ee738fe143299996b49f1a75cc7d4afb20aeab81"}
Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.492228 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d98w7" event={"ID":"9dda38b8-011e-4cde-a88e-abdab857fe2f","Type":"ContainerStarted","Data":"2f609aadf6e65a57c597b81afa8ba18b4b1f73d779f9cf3ff545be1bb643902a"}
Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.495871 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" event={"ID":"052783b7-76c5-4c69-bc04-72230f147ee4","Type":"ContainerStarted","Data":"df6b8fcf55c758a5bfb6ea12b82ca2bf8c84c0a2681b0814d0ac57e3c3a55e3d"}
Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.497966 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" event={"ID":"083fbf33-a605-4fbc-8bef-6ad1b73a8059","Type":"ContainerStarted","Data":"392252aed98d032bdddb613dbe05a7318069199ddbbf724c1f94bb8bc3044be3"}
Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.500411 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" event={"ID":"a16ecbae-a304-444d-b36c-c3e82a1332a1","Type":"ContainerStarted","Data":"34ab2052cf53b368c0ca738d43c8638442fde0d2e1cbf6d3ef88d173ef9f853c"}
Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.501977 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" event={"ID":"a7e87130-d830-4391-a2b3-40d2b63149e2","Type":"ContainerStarted","Data":"0311032b4b1eabe6dc71d04999b097ebd80e5a86655dbafa78b7fd8a011ca3ed"} Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.502004 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" event={"ID":"a7e87130-d830-4391-a2b3-40d2b63149e2","Type":"ContainerStarted","Data":"fb32fbae938c63e00fac5769abd1f8a1be6f315728aef4df909dd776e07f72d7"} Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.503426 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n58bn" event={"ID":"0cef0962-392a-46f0-9bc4-14e6547d36c4","Type":"ContainerStarted","Data":"c6ac565f0f0f55b41afec3ecda83a5b784c90f2c70c822811caf81b10454609a"} Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.505727 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5695c" event={"ID":"51ce2604-c544-4207-8f53-daea97729643","Type":"ContainerStarted","Data":"bc6e403011c0db230b0f44a942f538f472306266334e12974f2cea23cac8a8cc"} Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.505768 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5695c" Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.508308 4728 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2v2s5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.508339 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" podUID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.510888 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.519676 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.536464 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k8snv" Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.538957 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zchf4" Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.559783 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:08 crc kubenswrapper[4728]: E0204 11:30:08.560189 4728 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.060176939 +0000 UTC m=+158.202881324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.660342 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:08 crc kubenswrapper[4728]: E0204 11:30:08.661944 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.161927223 +0000 UTC m=+158.304631608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.692992 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-skkqn" podStartSLOduration=131.692969646 podStartE2EDuration="2m11.692969646s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:08.618867934 +0000 UTC m=+157.761572319" watchObservedRunningTime="2026-02-04 11:30:08.692969646 +0000 UTC m=+157.835674031" Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.702008 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 11:30:08 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 04 11:30:08 crc kubenswrapper[4728]: [+]process-running ok Feb 04 11:30:08 crc kubenswrapper[4728]: healthz check failed Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.702062 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.765407 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:08 crc kubenswrapper[4728]: E0204 11:30:08.765677 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.265665421 +0000 UTC m=+158.408369806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.777549 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5695c" podStartSLOduration=8.777532035 podStartE2EDuration="8.777532035s" podCreationTimestamp="2026-02-04 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:08.775998205 +0000 UTC m=+157.918702590" watchObservedRunningTime="2026-02-04 11:30:08.777532035 +0000 UTC m=+157.920236420" Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.866163 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:08 crc kubenswrapper[4728]: E0204 11:30:08.866419 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.366393588 +0000 UTC m=+158.509097973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.866723 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:08 crc kubenswrapper[4728]: E0204 11:30:08.867065 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.367053396 +0000 UTC m=+158.509757781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.955696 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qdvrf" podStartSLOduration=131.955679933 podStartE2EDuration="2m11.955679933s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:08.955087128 +0000 UTC m=+158.097791513" watchObservedRunningTime="2026-02-04 11:30:08.955679933 +0000 UTC m=+158.098384308" Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.957841 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" podStartSLOduration=131.9578349 podStartE2EDuration="2m11.9578349s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:08.898324584 +0000 UTC m=+158.041028969" watchObservedRunningTime="2026-02-04 11:30:08.9578349 +0000 UTC m=+158.100539285" Feb 04 11:30:08 crc kubenswrapper[4728]: I0204 11:30:08.968113 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:08 crc kubenswrapper[4728]: E0204 11:30:08.968486 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.468470361 +0000 UTC m=+158.611174746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.060539 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6rlf" podStartSLOduration=132.060522539 podStartE2EDuration="2m12.060522539s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:09.058835395 +0000 UTC m=+158.201539780" watchObservedRunningTime="2026-02-04 11:30:09.060522539 +0000 UTC m=+158.203226924" Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.068945 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.069195 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.569185699 +0000 UTC m=+158.711890084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.169544 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.169908 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.669890706 +0000 UTC m=+158.812595091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.270543 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.270876 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.77086478 +0000 UTC m=+158.913569165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.371416 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.371601 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.871576637 +0000 UTC m=+159.014281022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.371781 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.372111 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.872100172 +0000 UTC m=+159.014804557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.473396 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.473973 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.973958788 +0000 UTC m=+159.116663173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.475813 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.476153 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:09.976142477 +0000 UTC m=+159.118846862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.511310 4728 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lbdwv container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.511355 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv" podUID="43618563-306c-4371-a92b-4baf4aa7e352" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.532554 4728 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6hr78 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 04 11:30:09 crc kubenswrapper[4728]: [+]log ok Feb 04 11:30:09 crc kubenswrapper[4728]: [+]etcd ok Feb 04 11:30:09 crc kubenswrapper[4728]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 04 11:30:09 crc kubenswrapper[4728]: [+]poststarthook/generic-apiserver-start-informers ok Feb 04 11:30:09 crc kubenswrapper[4728]: [+]poststarthook/max-in-flight-filter ok Feb 04 11:30:09 crc kubenswrapper[4728]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 04 11:30:09 crc kubenswrapper[4728]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 04 11:30:09 crc kubenswrapper[4728]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: 
reason withheld Feb 04 11:30:09 crc kubenswrapper[4728]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 04 11:30:09 crc kubenswrapper[4728]: [+]poststarthook/project.openshift.io-projectcache ok Feb 04 11:30:09 crc kubenswrapper[4728]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 04 11:30:09 crc kubenswrapper[4728]: [+]poststarthook/openshift.io-startinformers ok Feb 04 11:30:09 crc kubenswrapper[4728]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 04 11:30:09 crc kubenswrapper[4728]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 04 11:30:09 crc kubenswrapper[4728]: livez check failed Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.532649 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6hr78" podUID="a4e9bf47-202b-4206-8758-a446e86d7a6b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.536105 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" event={"ID":"f58ac097-02b6-4c5c-a670-62f8fcdc5853","Type":"ContainerStarted","Data":"211431a0a52d577a9a1d3f1104578be8317f9919e582533aa1ca22c1d7079e92"} Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.536152 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" event={"ID":"f58ac097-02b6-4c5c-a670-62f8fcdc5853","Type":"ContainerStarted","Data":"48ba8795de078e5b9099324a94ae168efa26138f890976f4261bfc333afae24d"} Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.537744 4728 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2v2s5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.537801 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" podUID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.578369 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.578963 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.078946189 +0000 UTC m=+159.221650574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.680924 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.684174 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.184161126 +0000 UTC m=+159.326865501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.695933 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 04 11:30:09 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld
Feb 04 11:30:09 crc kubenswrapper[4728]: [+]process-running ok
Feb 04 11:30:09 crc kubenswrapper[4728]: healthz check failed
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.695983 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.782175 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.782370 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.282342266 +0000 UTC m=+159.425046651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.782447 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.782809 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.282801028 +0000 UTC m=+159.425505413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.823245 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sszrr"]
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.824845 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.826679 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.883611 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.883831 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.383786472 +0000 UTC m=+159.526490857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.883944 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.884272 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.384264774 +0000 UTC m=+159.526969149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.896192 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sszrr"]
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.936917 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skj7q"
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.954920 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lbdwv"
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.984661 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.984875 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.484849278 +0000 UTC m=+159.627553673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.984955 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-catalog-content\") pod \"community-operators-sszrr\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") " pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.984988 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zwln\" (UniqueName: \"kubernetes.io/projected/68c43db7-d07e-45eb-bd58-6651d8a0e342-kube-api-access-6zwln\") pod \"community-operators-sszrr\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") " pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.985154 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:09 crc kubenswrapper[4728]: I0204 11:30:09.985260 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-utilities\") pod \"community-operators-sszrr\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") " pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:30:09 crc kubenswrapper[4728]: E0204 11:30:09.985470 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.485458035 +0000 UTC m=+159.628162420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.006533 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w94hf"]
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.007583 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w94hf"
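
The cluster of failures above has a single cause: the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with the kubelet, so every mount and unmount that needs a CSI client is rejected and requeued after the delay reported in durationBeforeRetry. A minimal Go sketch of that retry shape, assuming the fixed 500ms delay shown in these records; mountVolume and its registration flag are hypothetical stand-ins, not kubelet code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // mountVolume stands in for a volume operation that fails until the
    // CSI driver has registered (hypothetical helper, for illustration).
    func mountVolume(csiDriverRegistered bool) error {
        if !csiDriverRegistered {
            return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
        }
        return nil
    }

    func main() {
        const durationBeforeRetry = 500 * time.Millisecond // as in the records above
        for attempt := 1; ; attempt++ {
            // Pretend the driver registers while the loop is running.
            if err := mountVolume(attempt >= 4); err != nil {
                fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", attempt, err, durationBeforeRetry)
                time.Sleep(durationBeforeRetry)
                continue
            }
            fmt.Printf("attempt %d: MountVolume succeeded\n", attempt)
            return
        }
    }
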
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.009321 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.047464 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w94hf"]
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.085882 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:10 crc kubenswrapper[4728]: E0204 11:30:10.086142 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.586107681 +0000 UTC m=+159.728812066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.086203 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-utilities\") pod \"community-operators-sszrr\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") " pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.086237 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-catalog-content\") pod \"certified-operators-w94hf\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") " pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.086259 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-catalog-content\") pod \"community-operators-sszrr\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") " pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.086279 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zwln\" (UniqueName: \"kubernetes.io/projected/68c43db7-d07e-45eb-bd58-6651d8a0e342-kube-api-access-6zwln\") pod \"community-operators-sszrr\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") " pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.086307 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-utilities\") pod \"certified-operators-w94hf\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") " pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.086377 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.086404 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmts\" (UniqueName: \"kubernetes.io/projected/b13c4294-fd84-478b-b4a0-321a5d706499-kube-api-access-csmts\") pod \"certified-operators-w94hf\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") " pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:30:10 crc kubenswrapper[4728]: E0204 11:30:10.086717 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.586705916 +0000 UTC m=+159.729410301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.086800 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-utilities\") pod \"community-operators-sszrr\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") " pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.086910 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-catalog-content\") pod \"community-operators-sszrr\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") " pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.109950 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zwln\" (UniqueName: \"kubernetes.io/projected/68c43db7-d07e-45eb-bd58-6651d8a0e342-kube-api-access-6zwln\") pod \"community-operators-sszrr\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") " pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.138262 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.187259 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.187406 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-catalog-content\") pod \"certified-operators-w94hf\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") " pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:30:10 crc kubenswrapper[4728]: E0204 11:30:10.187484 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.687453074 +0000 UTC m=+159.830157479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.187614 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-utilities\") pod \"certified-operators-w94hf\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") " pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.187768 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-catalog-content\") pod \"certified-operators-w94hf\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") " pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.187790 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.187825 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csmts\" (UniqueName: \"kubernetes.io/projected/b13c4294-fd84-478b-b4a0-321a5d706499-kube-api-access-csmts\") pod \"certified-operators-w94hf\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") " pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.188065 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-utilities\") pod \"certified-operators-w94hf\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") " pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:30:10 crc kubenswrapper[4728]: E0204 11:30:10.188310 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.688299567 +0000 UTC m=+159.831003952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.208303 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g9ct6"]
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.209350 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.233516 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csmts\" (UniqueName: \"kubernetes.io/projected/b13c4294-fd84-478b-b4a0-321a5d706499-kube-api-access-csmts\") pod \"certified-operators-w94hf\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") " pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.255368 4728 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.290721 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:10 crc kubenswrapper[4728]: E0204 11:30:10.290956 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.790925255 +0000 UTC m=+159.933629640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.291353 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lpp\" (UniqueName: \"kubernetes.io/projected/c17fa247-ec01-449d-9888-ab485b1496a6-kube-api-access-p5lpp\") pod \"community-operators-g9ct6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") " pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.291396 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-catalog-content\") pod \"community-operators-g9ct6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") " pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.291538 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-utilities\") pod \"community-operators-g9ct6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") " pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.291623 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:10 crc kubenswrapper[4728]: E0204 11:30:10.292109 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.792070235 +0000 UTC m=+159.934774620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.322916 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.341128 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9ct6"]
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.397157 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.397462 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-utilities\") pod \"community-operators-g9ct6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") " pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.397519 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-catalog-content\") pod \"community-operators-g9ct6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") " pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.397541 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5lpp\" (UniqueName: \"kubernetes.io/projected/c17fa247-ec01-449d-9888-ab485b1496a6-kube-api-access-p5lpp\") pod \"community-operators-g9ct6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") " pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:30:10 crc kubenswrapper[4728]: E0204 11:30:10.397913 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:10.897898657 +0000 UTC m=+160.040603042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.398291 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-utilities\") pod \"community-operators-g9ct6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") " pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.402166 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-catalog-content\") pod \"community-operators-g9ct6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") " pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.419663 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vrmfp"]
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.420814 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.433703 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5lpp\" (UniqueName: \"kubernetes.io/projected/c17fa247-ec01-449d-9888-ab485b1496a6-kube-api-access-p5lpp\") pod \"community-operators-g9ct6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") " pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.455080 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrmfp"]
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.500442 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-catalog-content\") pod \"certified-operators-vrmfp\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") " pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.500521 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.500543 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6twpj\" (UniqueName: \"kubernetes.io/projected/eef08543-a746-4aad-a4be-5ee0bb7464a8-kube-api-access-6twpj\") pod \"certified-operators-vrmfp\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") " pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.500563 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-utilities\") pod \"certified-operators-vrmfp\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") " pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:30:10 crc kubenswrapper[4728]: E0204 11:30:10.500854 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:11.000843124 +0000 UTC m=+160.143547499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.558209 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.580563 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" event={"ID":"f58ac097-02b6-4c5c-a670-62f8fcdc5853","Type":"ContainerStarted","Data":"458506a2f11792b373edad126830fb975368f4c7ea2c9b7a3b7cbd7ec5478e83"}
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.603674 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.604296 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-catalog-content\") pod \"certified-operators-vrmfp\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") " pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.604388 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6twpj\" (UniqueName: \"kubernetes.io/projected/eef08543-a746-4aad-a4be-5ee0bb7464a8-kube-api-access-6twpj\") pod \"certified-operators-vrmfp\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") " pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.604418 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-utilities\") pod \"certified-operators-vrmfp\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") " pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.604886 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-utilities\") pod \"certified-operators-vrmfp\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") " pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:30:10 crc kubenswrapper[4728]: E0204 11:30:10.604974 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-04 11:30:11.104954511 +0000 UTC m=+160.247658896 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.605220 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-catalog-content\") pod \"certified-operators-vrmfp\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") " pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.612922 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-x6j2r" podStartSLOduration=10.612900172 podStartE2EDuration="10.612900172s" podCreationTimestamp="2026-02-04 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:10.610138368 +0000 UTC m=+159.752842773" watchObservedRunningTime="2026-02-04 11:30:10.612900172 +0000 UTC m=+159.755604557"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.662656 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6twpj\" (UniqueName: \"kubernetes.io/projected/eef08543-a746-4aad-a4be-5ee0bb7464a8-kube-api-access-6twpj\") pod \"certified-operators-vrmfp\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") " pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.702123 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sszrr"]
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.703004 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 04 11:30:10 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld
Feb 04 11:30:10 crc kubenswrapper[4728]: [+]process-running ok
Feb 04 11:30:10 crc kubenswrapper[4728]: healthz check failed
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.703042 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.706176 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:10 crc kubenswrapper[4728]: E0204 11:30:10.709142 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-04 11:30:11.20912331 +0000 UTC m=+160.351827755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57d49" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.727924 4728 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-04T11:30:10.255397784Z","Handler":null,"Name":""}
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.740821 4728 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.740863 4728 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.764546 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.807795 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.812292 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.830734 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w94hf"]
Feb 04 11:30:10 crc kubenswrapper[4728]: W0204 11:30:10.844602 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb13c4294_fd84_478b_b4a0_321a5d706499.slice/crio-5c4b9fd2740ade93272d7c58538afef4c58704d38cbdcb18179b49215707a553 WatchSource:0}: Error finding container 5c4b9fd2740ade93272d7c58538afef4c58704d38cbdcb18179b49215707a553: Status 404 returned error can't find the container with id 5c4b9fd2740ade93272d7c58538afef4c58704d38cbdcb18179b49215707a553
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.896201 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9ct6"]
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.909608 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:10 crc kubenswrapper[4728]: W0204 11:30:10.917190 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc17fa247_ec01_449d_9888_ab485b1496a6.slice/crio-d10bc983a4d73a42e04a5aa0248ad202a45758d9f12ff8069ba28d2d5c1689e3 WatchSource:0}: Error finding container d10bc983a4d73a42e04a5aa0248ad202a45758d9f12ff8069ba28d2d5c1689e3: Status 404 returned error can't find the container with id d10bc983a4d73a42e04a5aa0248ad202a45758d9f12ff8069ba28d2d5c1689e3
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.919707 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.919762 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:10 crc kubenswrapper[4728]: I0204 11:30:10.971704 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57d49\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") " pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.024936 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrmfp"]
Feb 04 11:30:11 crc kubenswrapper[4728]: W0204 11:30:11.081867 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef08543_a746_4aad_a4be_5ee0bb7464a8.slice/crio-941c3c89e007e76f140ec70d86d1db10c974dd815cd11385a35b80b64624f374 WatchSource:0}: Error finding container 941c3c89e007e76f140ec70d86d1db10c974dd815cd11385a35b80b64624f374: Status 404 returned error can't find the container with id 941c3c89e007e76f140ec70d86d1db10c974dd815cd11385a35b80b64624f374
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.173742 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.421858 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57d49"]
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.563010 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.599885 4728 generic.go:334] "Generic (PLEG): container finished" podID="68c43db7-d07e-45eb-bd58-6651d8a0e342" containerID="17dac3bc2951278f544dc44e35de35cd2ba72913746ac6be89658368e2631982" exitCode=0
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.599947 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sszrr" event={"ID":"68c43db7-d07e-45eb-bd58-6651d8a0e342","Type":"ContainerDied","Data":"17dac3bc2951278f544dc44e35de35cd2ba72913746ac6be89658368e2631982"}
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.599973 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sszrr" event={"ID":"68c43db7-d07e-45eb-bd58-6651d8a0e342","Type":"ContainerStarted","Data":"d0b9deab00cb5c03964b582b5401403f78f798b9fb7645456c2d1a4bc81e85b0"}
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.602936 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.604241 4728 generic.go:334] "Generic (PLEG): container finished" podID="c17fa247-ec01-449d-9888-ab485b1496a6" containerID="d30dd9606a0f6e055791fb2605cb9c6c21c019d79f75b837a392e38895529982" exitCode=0
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.604319 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9ct6" event={"ID":"c17fa247-ec01-449d-9888-ab485b1496a6","Type":"ContainerDied","Data":"d30dd9606a0f6e055791fb2605cb9c6c21c019d79f75b837a392e38895529982"}
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.604345 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9ct6" event={"ID":"c17fa247-ec01-449d-9888-ab485b1496a6","Type":"ContainerStarted","Data":"d10bc983a4d73a42e04a5aa0248ad202a45758d9f12ff8069ba28d2d5c1689e3"}
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.608824 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-57d49" event={"ID":"4983cdcb-9bb3-41d2-9164-f24ee5753562","Type":"ContainerStarted","Data":"ffe232dd5d099115470ad128bfacdcae2357a5a8f7c8af51d67abd93f690b786"}
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.608879 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-57d49" event={"ID":"4983cdcb-9bb3-41d2-9164-f24ee5753562","Type":"ContainerStarted","Data":"42fee2d1dd36558635bb00047da35784f58add06c501a6564a3e8a742cb9e146"}
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.608900 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.611523 4728 generic.go:334] "Generic (PLEG): container finished" podID="eef08543-a746-4aad-a4be-5ee0bb7464a8" containerID="f55bb8f923f25a4e5708acd07bf9892670262ccc30d3d7b999df9bf38b5a51ee" exitCode=0
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.611606 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrmfp" event={"ID":"eef08543-a746-4aad-a4be-5ee0bb7464a8","Type":"ContainerDied","Data":"f55bb8f923f25a4e5708acd07bf9892670262ccc30d3d7b999df9bf38b5a51ee"}
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.611646 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrmfp" event={"ID":"eef08543-a746-4aad-a4be-5ee0bb7464a8","Type":"ContainerStarted","Data":"941c3c89e007e76f140ec70d86d1db10c974dd815cd11385a35b80b64624f374"}
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.612659 4728 generic.go:334] "Generic (PLEG): container finished" podID="b13c4294-fd84-478b-b4a0-321a5d706499" containerID="43b17b32dfdc64b05ca602e73193c9c641bfb3fc25d1aad22ea5e4e96dc70b6b" exitCode=0
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.612781 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w94hf" event={"ID":"b13c4294-fd84-478b-b4a0-321a5d706499","Type":"ContainerDied","Data":"43b17b32dfdc64b05ca602e73193c9c641bfb3fc25d1aad22ea5e4e96dc70b6b"}
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.612816 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w94hf" event={"ID":"b13c4294-fd84-478b-b4a0-321a5d706499","Type":"ContainerStarted","Data":"5c4b9fd2740ade93272d7c58538afef4c58704d38cbdcb18179b49215707a553"}
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.695042 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 04 11:30:11 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld
Feb 04 11:30:11 crc kubenswrapper[4728]: [+]process-running ok
Feb 04 11:30:11 crc kubenswrapper[4728]: healthz check failed
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.695101 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 04 11:30:11 crc kubenswrapper[4728]: I0204 11:30:11.712626 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-57d49" podStartSLOduration=134.712605975 podStartE2EDuration="2m14.712605975s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:11.709842762 +0000 UTC m=+160.852547167" watchObservedRunningTime="2026-02-04 11:30:11.712605975 +0000 UTC m=+160.855310360"
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.007645 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7djd8"]
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.009309 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.014658 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.029790 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7djd8"]
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.124534 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgk4h\" (UniqueName: \"kubernetes.io/projected/af9c8d19-58ae-479c-8c47-3ce89d9c803c-kube-api-access-hgk4h\") pod \"redhat-marketplace-7djd8\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") " pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.124858 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-utilities\") pod \"redhat-marketplace-7djd8\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") " pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.124910 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-catalog-content\") pod \"redhat-marketplace-7djd8\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") " pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.226457 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-catalog-content\") pod \"redhat-marketplace-7djd8\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") " pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.226557 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgk4h\" (UniqueName: \"kubernetes.io/projected/af9c8d19-58ae-479c-8c47-3ce89d9c803c-kube-api-access-hgk4h\") pod \"redhat-marketplace-7djd8\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") " pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.226630 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-utilities\") pod \"redhat-marketplace-7djd8\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") " pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.227261 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-catalog-content\") pod \"redhat-marketplace-7djd8\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") " pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.227578 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-utilities\") pod \"redhat-marketplace-7djd8\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") " pod="openshift-marketplace/redhat-marketplace-7djd8"
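
The pod_startup_latency_tracker record above reports the image-registry pod's startup SLO duration: since both pull timestamps are zero (no image pulls were observed), it is simply the span from podCreationTimestamp to the watch-observed running time. A sketch reproducing that arithmetic from the values in the record:

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Timestamps copied from the "Observed pod startup duration" record
        // for image-registry-697d97f7c8-57d49.
        const layout = "2006-01-02 15:04:05 -0700 MST"
        created, err := time.Parse(layout, "2026-02-04 11:27:57 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        running, err := time.Parse(layout, "2026-02-04 11:30:11.712605975 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        // Matches podStartE2EDuration="2m14.712605975s" in the record.
        fmt.Println(running.Sub(created))
    }
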
\"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") " pod="openshift-marketplace/redhat-marketplace-7djd8" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.245653 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgk4h\" (UniqueName: \"kubernetes.io/projected/af9c8d19-58ae-479c-8c47-3ce89d9c803c-kube-api-access-hgk4h\") pod \"redhat-marketplace-7djd8\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") " pod="openshift-marketplace/redhat-marketplace-7djd8" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.326330 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7djd8" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.405869 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bgxgn"] Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.406769 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgxgn" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.418139 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgxgn"] Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.530792 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjthf\" (UniqueName: \"kubernetes.io/projected/8c5be741-eb53-486a-8af4-1e0b4974ddb7-kube-api-access-qjthf\") pod \"redhat-marketplace-bgxgn\" (UID: \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") " pod="openshift-marketplace/redhat-marketplace-bgxgn" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.531092 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-catalog-content\") pod \"redhat-marketplace-bgxgn\" (UID: \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") " pod="openshift-marketplace/redhat-marketplace-bgxgn" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.531157 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-utilities\") pod \"redhat-marketplace-bgxgn\" (UID: \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") " pod="openshift-marketplace/redhat-marketplace-bgxgn" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.619696 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.636092 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-6hr78" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.632088 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-catalog-content\") pod \"redhat-marketplace-bgxgn\" (UID: \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") " pod="openshift-marketplace/redhat-marketplace-bgxgn" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.639992 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-utilities\") pod \"redhat-marketplace-bgxgn\" (UID: 
\"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") " pod="openshift-marketplace/redhat-marketplace-bgxgn" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.640068 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjthf\" (UniqueName: \"kubernetes.io/projected/8c5be741-eb53-486a-8af4-1e0b4974ddb7-kube-api-access-qjthf\") pod \"redhat-marketplace-bgxgn\" (UID: \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") " pod="openshift-marketplace/redhat-marketplace-bgxgn" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.640814 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-utilities\") pod \"redhat-marketplace-bgxgn\" (UID: \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") " pod="openshift-marketplace/redhat-marketplace-bgxgn" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.641203 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-catalog-content\") pod \"redhat-marketplace-bgxgn\" (UID: \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") " pod="openshift-marketplace/redhat-marketplace-bgxgn" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.666111 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7djd8"] Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.666131 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjthf\" (UniqueName: \"kubernetes.io/projected/8c5be741-eb53-486a-8af4-1e0b4974ddb7-kube-api-access-qjthf\") pod \"redhat-marketplace-bgxgn\" (UID: \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") " pod="openshift-marketplace/redhat-marketplace-bgxgn" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.674255 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.674316 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.677959 4728 patch_prober.go:28] interesting pod/downloads-7954f5f757-l4qn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.678014 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l4qn4" podUID="20be2be6-dc74-4404-b883-1ad4af94512b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.679234 4728 patch_prober.go:28] interesting pod/downloads-7954f5f757-l4qn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.679297 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l4qn4" podUID="20be2be6-dc74-4404-b883-1ad4af94512b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 
10.217.0.13:8080: connect: connection refused" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.685801 4728 patch_prober.go:28] interesting pod/console-f9d7485db-c4ckr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.685861 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c4ckr" podUID="86a5137c-eb55-438a-8e8d-99f2a2d4bf48" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.689873 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" event={"ID":"d800f63b-2465-4553-aa78-99fff8f484bb","Type":"ContainerDied","Data":"a390d0391a5f04f1b39a8b6170ce85f7bb4f9cc0e457acdaafb0ea159f620355"} Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.689840 4728 generic.go:334] "Generic (PLEG): container finished" podID="d800f63b-2465-4553-aa78-99fff8f484bb" containerID="a390d0391a5f04f1b39a8b6170ce85f7bb4f9cc0e457acdaafb0ea159f620355" exitCode=0 Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.708798 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 11:30:12 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 04 11:30:12 crc kubenswrapper[4728]: [+]process-running ok Feb 04 11:30:12 crc kubenswrapper[4728]: healthz check failed Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.708870 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.733103 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgxgn" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.928463 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.928879 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:12 crc kubenswrapper[4728]: I0204 11:30:12.938189 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.014636 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4r26n"] Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.017214 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4r26n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.019486 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.121123 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4r26n"] Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.152140 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zng5p\" (UniqueName: \"kubernetes.io/projected/81d54708-f68a-4e0b-b8e4-699a15e89f03-kube-api-access-zng5p\") pod \"redhat-operators-4r26n\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") " pod="openshift-marketplace/redhat-operators-4r26n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.152232 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-utilities\") pod \"redhat-operators-4r26n\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") " pod="openshift-marketplace/redhat-operators-4r26n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.152283 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-catalog-content\") pod \"redhat-operators-4r26n\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") " pod="openshift-marketplace/redhat-operators-4r26n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.253025 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zng5p\" (UniqueName: \"kubernetes.io/projected/81d54708-f68a-4e0b-b8e4-699a15e89f03-kube-api-access-zng5p\") pod \"redhat-operators-4r26n\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") " pod="openshift-marketplace/redhat-operators-4r26n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.253087 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-utilities\") pod \"redhat-operators-4r26n\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") " pod="openshift-marketplace/redhat-operators-4r26n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.253143 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-catalog-content\") pod \"redhat-operators-4r26n\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") " pod="openshift-marketplace/redhat-operators-4r26n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.253935 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-catalog-content\") pod \"redhat-operators-4r26n\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") " pod="openshift-marketplace/redhat-operators-4r26n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.254106 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-utilities\") pod \"redhat-operators-4r26n\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") " 
pod="openshift-marketplace/redhat-operators-4r26n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.277952 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zng5p\" (UniqueName: \"kubernetes.io/projected/81d54708-f68a-4e0b-b8e4-699a15e89f03-kube-api-access-zng5p\") pod \"redhat-operators-4r26n\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") " pod="openshift-marketplace/redhat-operators-4r26n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.287686 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.288672 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.294802 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.295059 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.299133 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgxgn"] Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.301433 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.338819 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4r26n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.354424 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ce0449a-5b37-455c-a796-e7af45f796aa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5ce0449a-5b37-455c-a796-e7af45f796aa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.354498 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ce0449a-5b37-455c-a796-e7af45f796aa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5ce0449a-5b37-455c-a796-e7af45f796aa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.416922 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z9478"] Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.418344 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.419111 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9478"] Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.456566 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ce0449a-5b37-455c-a796-e7af45f796aa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5ce0449a-5b37-455c-a796-e7af45f796aa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.456694 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ce0449a-5b37-455c-a796-e7af45f796aa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5ce0449a-5b37-455c-a796-e7af45f796aa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.456795 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ce0449a-5b37-455c-a796-e7af45f796aa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5ce0449a-5b37-455c-a796-e7af45f796aa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.479429 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ce0449a-5b37-455c-a796-e7af45f796aa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5ce0449a-5b37-455c-a796-e7af45f796aa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.557975 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p57tf\" (UniqueName: \"kubernetes.io/projected/f7fac84c-a087-48c4-8545-c4eef1dc364b-kube-api-access-p57tf\") pod \"redhat-operators-z9478\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.558029 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-utilities\") pod \"redhat-operators-z9478\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.558088 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-catalog-content\") pod \"redhat-operators-z9478\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.581115 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.616365 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.659117 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p57tf\" (UniqueName: \"kubernetes.io/projected/f7fac84c-a087-48c4-8545-c4eef1dc364b-kube-api-access-p57tf\") pod \"redhat-operators-z9478\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.659174 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-utilities\") pod \"redhat-operators-z9478\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.659244 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-catalog-content\") pod \"redhat-operators-z9478\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.659933 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-catalog-content\") pod \"redhat-operators-z9478\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.660202 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-utilities\") pod \"redhat-operators-z9478\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.690261 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.694385 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 11:30:13 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 04 11:30:13 crc kubenswrapper[4728]: [+]process-running ok Feb 04 11:30:13 crc kubenswrapper[4728]: healthz check failed Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.695280 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.707039 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p57tf\" (UniqueName: \"kubernetes.io/projected/f7fac84c-a087-48c4-8545-c4eef1dc364b-kube-api-access-p57tf\") pod \"redhat-operators-z9478\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.719684 4728 generic.go:334] "Generic (PLEG): container finished" 
podID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" containerID="94c7011c0c984ecfe7ca1e4a7c89956b5fdf04cc7e69de4a9e241504c267ab64" exitCode=0 Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.719783 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7djd8" event={"ID":"af9c8d19-58ae-479c-8c47-3ce89d9c803c","Type":"ContainerDied","Data":"94c7011c0c984ecfe7ca1e4a7c89956b5fdf04cc7e69de4a9e241504c267ab64"} Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.719814 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7djd8" event={"ID":"af9c8d19-58ae-479c-8c47-3ce89d9c803c","Type":"ContainerStarted","Data":"19c3cd9114029b19f07e3c70e3bc3fae0ad4c93322810f3f3d1dac293ef68ac3"} Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.724123 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgxgn" event={"ID":"8c5be741-eb53-486a-8af4-1e0b4974ddb7","Type":"ContainerStarted","Data":"9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48"} Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.724165 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgxgn" event={"ID":"8c5be741-eb53-486a-8af4-1e0b4974ddb7","Type":"ContainerStarted","Data":"e093c81057eaef107e4a0f66a2b6f4adff2f4d747f8f46af29c81f9542806231"} Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.730285 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmm7n" Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.810299 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4r26n"] Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.812928 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:13 crc kubenswrapper[4728]: W0204 11:30:13.850300 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81d54708_f68a_4e0b_b8e4_699a15e89f03.slice/crio-76e62adfbb8ee2c65f86e80d94f48938949f0d4aaffc51e93001521cf4089853 WatchSource:0}: Error finding container 76e62adfbb8ee2c65f86e80d94f48938949f0d4aaffc51e93001521cf4089853: Status 404 returned error can't find the container with id 76e62adfbb8ee2c65f86e80d94f48938949f0d4aaffc51e93001521cf4089853 Feb 04 11:30:13 crc kubenswrapper[4728]: I0204 11:30:13.961530 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.014589 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.018537 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.023324 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.023557 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.028720 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.197002 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.198689 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.215710 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9478"] Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.235464 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.300505 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.300615 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.300714 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.328797 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.402402 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.402457 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d800f63b-2465-4553-aa78-99fff8f484bb-secret-volume\") pod \"d800f63b-2465-4553-aa78-99fff8f484bb\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.402513 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b56w\" (UniqueName: \"kubernetes.io/projected/d800f63b-2465-4553-aa78-99fff8f484bb-kube-api-access-7b56w\") pod \"d800f63b-2465-4553-aa78-99fff8f484bb\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.402618 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d800f63b-2465-4553-aa78-99fff8f484bb-config-volume\") pod \"d800f63b-2465-4553-aa78-99fff8f484bb\" (UID: \"d800f63b-2465-4553-aa78-99fff8f484bb\") " Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.408339 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d800f63b-2465-4553-aa78-99fff8f484bb-config-volume" (OuterVolumeSpecName: "config-volume") pod "d800f63b-2465-4553-aa78-99fff8f484bb" (UID: "d800f63b-2465-4553-aa78-99fff8f484bb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.408802 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d800f63b-2465-4553-aa78-99fff8f484bb-kube-api-access-7b56w" (OuterVolumeSpecName: "kube-api-access-7b56w") pod "d800f63b-2465-4553-aa78-99fff8f484bb" (UID: "d800f63b-2465-4553-aa78-99fff8f484bb"). InnerVolumeSpecName "kube-api-access-7b56w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.415053 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d800f63b-2465-4553-aa78-99fff8f484bb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d800f63b-2465-4553-aa78-99fff8f484bb" (UID: "d800f63b-2465-4553-aa78-99fff8f484bb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.509710 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d800f63b-2465-4553-aa78-99fff8f484bb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.510009 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d800f63b-2465-4553-aa78-99fff8f484bb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.510019 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b56w\" (UniqueName: \"kubernetes.io/projected/d800f63b-2465-4553-aa78-99fff8f484bb-kube-api-access-7b56w\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.695380 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 11:30:14 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 04 11:30:14 crc kubenswrapper[4728]: [+]process-running ok Feb 04 11:30:14 crc kubenswrapper[4728]: healthz check failed Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.695443 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.746571 4728 generic.go:334] "Generic (PLEG): container finished" podID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" containerID="9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48" exitCode=0 Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.746836 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgxgn" event={"ID":"8c5be741-eb53-486a-8af4-1e0b4974ddb7","Type":"ContainerDied","Data":"9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48"} Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.762361 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5ce0449a-5b37-455c-a796-e7af45f796aa","Type":"ContainerStarted","Data":"b08bd8bc673f165384fbc6b95c95de63007942c96a5ed8ce798722ec06bce137"} Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.762392 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5ce0449a-5b37-455c-a796-e7af45f796aa","Type":"ContainerStarted","Data":"b9ff2a1e08994da03710ba865c4f425a6466bee79ae675861ffb6e68d44e3223"} Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.776170 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.776181 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c" event={"ID":"d800f63b-2465-4553-aa78-99fff8f484bb","Type":"ContainerDied","Data":"29ad144e4844acfa2275f690031f8d17538f9aeec53b09aa835141f9063f7008"} Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.779160 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29ad144e4844acfa2275f690031f8d17538f9aeec53b09aa835141f9063f7008" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.785485 4728 generic.go:334] "Generic (PLEG): container finished" podID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerID="16c2e59082b027919f76ec8ecff9ad3d2035ff49ee1bdd70da464257cfe0665c" exitCode=0 Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.785686 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r26n" event={"ID":"81d54708-f68a-4e0b-b8e4-699a15e89f03","Type":"ContainerDied","Data":"16c2e59082b027919f76ec8ecff9ad3d2035ff49ee1bdd70da464257cfe0665c"} Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.785779 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r26n" event={"ID":"81d54708-f68a-4e0b-b8e4-699a15e89f03","Type":"ContainerStarted","Data":"76e62adfbb8ee2c65f86e80d94f48938949f0d4aaffc51e93001521cf4089853"} Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.786172 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.786154002 podStartE2EDuration="1.786154002s" podCreationTimestamp="2026-02-04 11:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:14.782180027 +0000 UTC m=+163.924884412" watchObservedRunningTime="2026-02-04 11:30:14.786154002 +0000 UTC m=+163.928858387" Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.801182 4728 generic.go:334] "Generic (PLEG): container finished" podID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerID="316b103e2e93171e2c4891d1155199ff6e035a64de40327d133872e2df9457b3" exitCode=0 Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.802246 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9478" event={"ID":"f7fac84c-a087-48c4-8545-c4eef1dc364b","Type":"ContainerDied","Data":"316b103e2e93171e2c4891d1155199ff6e035a64de40327d133872e2df9457b3"} Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.802276 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9478" event={"ID":"f7fac84c-a087-48c4-8545-c4eef1dc364b","Type":"ContainerStarted","Data":"0f02ca77d601f8a2eec9f9ec31447123ec3f7f97b389cae95983aa8b1c57b2bd"} Feb 04 11:30:14 crc kubenswrapper[4728]: I0204 11:30:14.819803 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 04 11:30:14 crc kubenswrapper[4728]: W0204 11:30:14.839420 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode46cba7c_1bbe_4772_81c4_c5e88cd0a8f8.slice/crio-7bf79e21afa61258e82fb791e5d3e2f5972d032ff279d58102786d10b6a7a50d WatchSource:0}: Error finding container 
7bf79e21afa61258e82fb791e5d3e2f5972d032ff279d58102786d10b6a7a50d: Status 404 returned error can't find the container with id 7bf79e21afa61258e82fb791e5d3e2f5972d032ff279d58102786d10b6a7a50d Feb 04 11:30:15 crc kubenswrapper[4728]: I0204 11:30:15.694066 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 11:30:15 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 04 11:30:15 crc kubenswrapper[4728]: [+]process-running ok Feb 04 11:30:15 crc kubenswrapper[4728]: healthz check failed Feb 04 11:30:15 crc kubenswrapper[4728]: I0204 11:30:15.694341 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 11:30:15 crc kubenswrapper[4728]: I0204 11:30:15.811993 4728 generic.go:334] "Generic (PLEG): container finished" podID="5ce0449a-5b37-455c-a796-e7af45f796aa" containerID="b08bd8bc673f165384fbc6b95c95de63007942c96a5ed8ce798722ec06bce137" exitCode=0 Feb 04 11:30:15 crc kubenswrapper[4728]: I0204 11:30:15.812183 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5ce0449a-5b37-455c-a796-e7af45f796aa","Type":"ContainerDied","Data":"b08bd8bc673f165384fbc6b95c95de63007942c96a5ed8ce798722ec06bce137"} Feb 04 11:30:15 crc kubenswrapper[4728]: I0204 11:30:15.823446 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8","Type":"ContainerStarted","Data":"ddf85ca0069cc62d35815eefef8f51e8629092a2e91fac0cd92326654951b597"} Feb 04 11:30:15 crc kubenswrapper[4728]: I0204 11:30:15.823495 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8","Type":"ContainerStarted","Data":"7bf79e21afa61258e82fb791e5d3e2f5972d032ff279d58102786d10b6a7a50d"} Feb 04 11:30:16 crc kubenswrapper[4728]: I0204 11:30:16.694903 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 11:30:16 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 04 11:30:16 crc kubenswrapper[4728]: [+]process-running ok Feb 04 11:30:16 crc kubenswrapper[4728]: healthz check failed Feb 04 11:30:16 crc kubenswrapper[4728]: I0204 11:30:16.694982 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 11:30:16 crc kubenswrapper[4728]: I0204 11:30:16.854590 4728 generic.go:334] "Generic (PLEG): container finished" podID="e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8" containerID="ddf85ca0069cc62d35815eefef8f51e8629092a2e91fac0cd92326654951b597" exitCode=0 Feb 04 11:30:16 crc kubenswrapper[4728]: I0204 11:30:16.854817 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8","Type":"ContainerDied","Data":"ddf85ca0069cc62d35815eefef8f51e8629092a2e91fac0cd92326654951b597"} Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.220226 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.377902 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ce0449a-5b37-455c-a796-e7af45f796aa-kube-api-access\") pod \"5ce0449a-5b37-455c-a796-e7af45f796aa\" (UID: \"5ce0449a-5b37-455c-a796-e7af45f796aa\") " Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.377975 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ce0449a-5b37-455c-a796-e7af45f796aa-kubelet-dir\") pod \"5ce0449a-5b37-455c-a796-e7af45f796aa\" (UID: \"5ce0449a-5b37-455c-a796-e7af45f796aa\") " Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.378282 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce0449a-5b37-455c-a796-e7af45f796aa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5ce0449a-5b37-455c-a796-e7af45f796aa" (UID: "5ce0449a-5b37-455c-a796-e7af45f796aa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.383926 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce0449a-5b37-455c-a796-e7af45f796aa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5ce0449a-5b37-455c-a796-e7af45f796aa" (UID: "5ce0449a-5b37-455c-a796-e7af45f796aa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.478997 4728 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ce0449a-5b37-455c-a796-e7af45f796aa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.479039 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ce0449a-5b37-455c-a796-e7af45f796aa-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.693543 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 11:30:17 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 04 11:30:17 crc kubenswrapper[4728]: [+]process-running ok Feb 04 11:30:17 crc kubenswrapper[4728]: healthz check failed Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.693604 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.889785 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.889879 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5ce0449a-5b37-455c-a796-e7af45f796aa","Type":"ContainerDied","Data":"b9ff2a1e08994da03710ba865c4f425a6466bee79ae675861ffb6e68d44e3223"} Feb 04 11:30:17 crc kubenswrapper[4728]: I0204 11:30:17.889921 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9ff2a1e08994da03710ba865c4f425a6466bee79ae675861ffb6e68d44e3223" Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.207695 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.288569 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kube-api-access\") pod \"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8\" (UID: \"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8\") " Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.288647 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kubelet-dir\") pod \"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8\" (UID: \"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8\") " Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.289030 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8" (UID: "e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.292298 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8" (UID: "e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.390867 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.390917 4728 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.594390 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5695c" Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.693661 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 11:30:18 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 04 11:30:18 crc kubenswrapper[4728]: [+]process-running ok Feb 04 11:30:18 crc kubenswrapper[4728]: healthz check failed Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.693746 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.911620 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8","Type":"ContainerDied","Data":"7bf79e21afa61258e82fb791e5d3e2f5972d032ff279d58102786d10b6a7a50d"} Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.911677 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bf79e21afa61258e82fb791e5d3e2f5972d032ff279d58102786d10b6a7a50d" Feb 04 11:30:18 crc kubenswrapper[4728]: I0204 11:30:18.911745 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 04 11:30:19 crc kubenswrapper[4728]: I0204 11:30:19.432108 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:30:19 crc kubenswrapper[4728]: I0204 11:30:19.437185 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd2519d-be03-457c-b9d6-70862115f6a9-metrics-certs\") pod \"network-metrics-daemon-q6m9t\" (UID: \"8fd2519d-be03-457c-b9d6-70862115f6a9\") " pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:30:19 crc kubenswrapper[4728]: I0204 11:30:19.689104 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q6m9t" Feb 04 11:30:19 crc kubenswrapper[4728]: I0204 11:30:19.693304 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 11:30:19 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 04 11:30:19 crc kubenswrapper[4728]: [+]process-running ok Feb 04 11:30:19 crc kubenswrapper[4728]: healthz check failed Feb 04 11:30:19 crc kubenswrapper[4728]: I0204 11:30:19.693392 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 11:30:20 crc kubenswrapper[4728]: I0204 11:30:20.693467 4728 patch_prober.go:28] interesting pod/router-default-5444994796-zm7m8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 04 11:30:20 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 04 11:30:20 crc kubenswrapper[4728]: [+]process-running ok Feb 04 11:30:20 crc kubenswrapper[4728]: healthz check failed Feb 04 11:30:20 crc kubenswrapper[4728]: I0204 11:30:20.693537 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zm7m8" podUID="335a17f2-115c-479a-9dfb-01f13b079108" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 04 11:30:21 crc kubenswrapper[4728]: I0204 11:30:21.700829 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:21 crc kubenswrapper[4728]: I0204 11:30:21.706507 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zm7m8" Feb 04 11:30:22 crc kubenswrapper[4728]: I0204 11:30:22.673217 4728 patch_prober.go:28] interesting pod/console-f9d7485db-c4ckr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 04 11:30:22 crc kubenswrapper[4728]: I0204 11:30:22.673277 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c4ckr" podUID="86a5137c-eb55-438a-8e8d-99f2a2d4bf48" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 04 11:30:22 crc kubenswrapper[4728]: I0204 11:30:22.690884 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-l4qn4" Feb 04 11:30:25 crc kubenswrapper[4728]: I0204 11:30:25.945086 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tlnkw"] Feb 04 11:30:25 crc kubenswrapper[4728]: I0204 11:30:25.945558 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" podUID="15f8fdf4-3a93-4957-ad97-a1a376d821cd" containerName="controller-manager" 
containerID="cri-o://6c26de13a8ecee907c50d456b9b05e24b59f1cea21560659c460185391e4b0a4" gracePeriod=30 Feb 04 11:30:25 crc kubenswrapper[4728]: I0204 11:30:25.947978 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf"] Feb 04 11:30:25 crc kubenswrapper[4728]: I0204 11:30:25.949000 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" podUID="bdb06213-4bce-43d5-b16f-0bc09dc118fe" containerName="route-controller-manager" containerID="cri-o://c6d26280ed7ab5931cbd67528bf6b0c7ddc3341dd56954401e1e0f10e947d967" gracePeriod=30 Feb 04 11:30:26 crc kubenswrapper[4728]: I0204 11:30:26.987112 4728 generic.go:334] "Generic (PLEG): container finished" podID="15f8fdf4-3a93-4957-ad97-a1a376d821cd" containerID="6c26de13a8ecee907c50d456b9b05e24b59f1cea21560659c460185391e4b0a4" exitCode=0 Feb 04 11:30:26 crc kubenswrapper[4728]: I0204 11:30:26.987325 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" event={"ID":"15f8fdf4-3a93-4957-ad97-a1a376d821cd","Type":"ContainerDied","Data":"6c26de13a8ecee907c50d456b9b05e24b59f1cea21560659c460185391e4b0a4"} Feb 04 11:30:26 crc kubenswrapper[4728]: I0204 11:30:26.989361 4728 generic.go:334] "Generic (PLEG): container finished" podID="bdb06213-4bce-43d5-b16f-0bc09dc118fe" containerID="c6d26280ed7ab5931cbd67528bf6b0c7ddc3341dd56954401e1e0f10e947d967" exitCode=0 Feb 04 11:30:26 crc kubenswrapper[4728]: I0204 11:30:26.989398 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" event={"ID":"bdb06213-4bce-43d5-b16f-0bc09dc118fe","Type":"ContainerDied","Data":"c6d26280ed7ab5931cbd67528bf6b0c7ddc3341dd56954401e1e0f10e947d967"} Feb 04 11:30:27 crc kubenswrapper[4728]: I0204 11:30:27.245373 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q6m9t"] Feb 04 11:30:27 crc kubenswrapper[4728]: W0204 11:30:27.260363 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fd2519d_be03_457c_b9d6_70862115f6a9.slice/crio-de5526769ed0ea46e706a97968f59030a0a0e733bebbaa885c5dc32c7ecf96b6 WatchSource:0}: Error finding container de5526769ed0ea46e706a97968f59030a0a0e733bebbaa885c5dc32c7ecf96b6: Status 404 returned error can't find the container with id de5526769ed0ea46e706a97968f59030a0a0e733bebbaa885c5dc32c7ecf96b6 Feb 04 11:30:28 crc kubenswrapper[4728]: I0204 11:30:28.000842 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" event={"ID":"8fd2519d-be03-457c-b9d6-70862115f6a9","Type":"ContainerStarted","Data":"de5526769ed0ea46e706a97968f59030a0a0e733bebbaa885c5dc32c7ecf96b6"} Feb 04 11:30:31 crc kubenswrapper[4728]: I0204 11:30:31.186045 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-57d49" Feb 04 11:30:32 crc kubenswrapper[4728]: I0204 11:30:32.680680 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:32 crc kubenswrapper[4728]: I0204 11:30:32.686899 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:30:33 crc 
kubenswrapper[4728]: I0204 11:30:33.767797 4728 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ww7sf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 04 11:30:33 crc kubenswrapper[4728]: I0204 11:30:33.768241 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" podUID="bdb06213-4bce-43d5-b16f-0bc09dc118fe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 04 11:30:33 crc kubenswrapper[4728]: I0204 11:30:33.825156 4728 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tlnkw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 04 11:30:33 crc kubenswrapper[4728]: I0204 11:30:33.825216 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" podUID="15f8fdf4-3a93-4957-ad97-a1a376d821cd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 04 11:30:35 crc kubenswrapper[4728]: I0204 11:30:35.448696 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:30:35 crc kubenswrapper[4728]: I0204 11:30:35.448817 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 11:30:39 crc kubenswrapper[4728]: I0204 11:30:39.670469 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 04 11:30:42 crc kubenswrapper[4728]: E0204 11:30:42.423346 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 04 11:30:42 crc kubenswrapper[4728]: E0204 11:30:42.423595 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5lpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-g9ct6_openshift-marketplace(c17fa247-ec01-449d-9888-ab485b1496a6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 11:30:42 crc kubenswrapper[4728]: E0204 11:30:42.424725 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-g9ct6" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" Feb 04 11:30:43 crc kubenswrapper[4728]: I0204 11:30:43.167611 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z5qx6" Feb 04 11:30:43 crc kubenswrapper[4728]: I0204 11:30:43.766970 4728 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ww7sf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 04 11:30:43 crc kubenswrapper[4728]: I0204 11:30:43.767058 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" podUID="bdb06213-4bce-43d5-b16f-0bc09dc118fe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 04 11:30:43 crc kubenswrapper[4728]: E0204 11:30:43.809350 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 04 11:30:43 crc kubenswrapper[4728]: E0204 11:30:43.809776 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zwln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sszrr_openshift-marketplace(68c43db7-d07e-45eb-bd58-6651d8a0e342): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 11:30:43 crc kubenswrapper[4728]: E0204 11:30:43.811051 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sszrr" podUID="68c43db7-d07e-45eb-bd58-6651d8a0e342" Feb 04 11:30:43 crc kubenswrapper[4728]: I0204 11:30:43.824587 4728 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tlnkw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 04 11:30:43 crc kubenswrapper[4728]: I0204 11:30:43.824636 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" podUID="15f8fdf4-3a93-4957-ad97-a1a376d821cd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 04 11:30:43 crc kubenswrapper[4728]: E0204 11:30:43.993696 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-g9ct6" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" Feb 04 11:30:44 crc kubenswrapper[4728]: E0204 11:30:44.700175 4728 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 04 11:30:44 crc kubenswrapper[4728]: E0204 11:30:44.700490 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjthf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bgxgn_openshift-marketplace(8c5be741-eb53-486a-8af4-1e0b4974ddb7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 11:30:44 crc kubenswrapper[4728]: E0204 11:30:44.702208 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-bgxgn" podUID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" Feb 04 11:30:44 crc kubenswrapper[4728]: E0204 11:30:44.730947 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 04 11:30:44 crc kubenswrapper[4728]: E0204 11:30:44.731086 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgk4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-7djd8_openshift-marketplace(af9c8d19-58ae-479c-8c47-3ce89d9c803c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 11:30:44 crc kubenswrapper[4728]: E0204 11:30:44.734215 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-7djd8" podUID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" Feb 04 11:30:48 crc kubenswrapper[4728]: E0204 11:30:48.438405 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sszrr" podUID="68c43db7-d07e-45eb-bd58-6651d8a0e342" Feb 04 11:30:48 crc kubenswrapper[4728]: E0204 11:30:48.438405 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bgxgn" podUID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" Feb 04 11:30:48 crc kubenswrapper[4728]: E0204 11:30:48.438466 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-7djd8" podUID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.499394 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.502304 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.524319 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb06213-4bce-43d5-b16f-0bc09dc118fe-serving-cert\") pod \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.524376 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-config\") pod \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.524429 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtsh2\" (UniqueName: \"kubernetes.io/projected/bdb06213-4bce-43d5-b16f-0bc09dc118fe-kube-api-access-dtsh2\") pod \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.524464 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-client-ca\") pod \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\" (UID: \"bdb06213-4bce-43d5-b16f-0bc09dc118fe\") " Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.524502 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-proxy-ca-bundles\") pod \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.524532 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-config\") pod \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.524552 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8fdf4-3a93-4957-ad97-a1a376d821cd-serving-cert\") pod \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.524577 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kzz5\" (UniqueName: \"kubernetes.io/projected/15f8fdf4-3a93-4957-ad97-a1a376d821cd-kube-api-access-8kzz5\") pod \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.524623 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-client-ca\") pod \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\" (UID: \"15f8fdf4-3a93-4957-ad97-a1a376d821cd\") " Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.527043 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "bdb06213-4bce-43d5-b16f-0bc09dc118fe" (UID: 
"bdb06213-4bce-43d5-b16f-0bc09dc118fe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.531246 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "15f8fdf4-3a93-4957-ad97-a1a376d821cd" (UID: "15f8fdf4-3a93-4957-ad97-a1a376d821cd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.536212 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-client-ca" (OuterVolumeSpecName: "client-ca") pod "15f8fdf4-3a93-4957-ad97-a1a376d821cd" (UID: "15f8fdf4-3a93-4957-ad97-a1a376d821cd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.539702 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-config" (OuterVolumeSpecName: "config") pod "bdb06213-4bce-43d5-b16f-0bc09dc118fe" (UID: "bdb06213-4bce-43d5-b16f-0bc09dc118fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.540343 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-config" (OuterVolumeSpecName: "config") pod "15f8fdf4-3a93-4957-ad97-a1a376d821cd" (UID: "15f8fdf4-3a93-4957-ad97-a1a376d821cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.548480 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb06213-4bce-43d5-b16f-0bc09dc118fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bdb06213-4bce-43d5-b16f-0bc09dc118fe" (UID: "bdb06213-4bce-43d5-b16f-0bc09dc118fe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.553258 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f8fdf4-3a93-4957-ad97-a1a376d821cd-kube-api-access-8kzz5" (OuterVolumeSpecName: "kube-api-access-8kzz5") pod "15f8fdf4-3a93-4957-ad97-a1a376d821cd" (UID: "15f8fdf4-3a93-4957-ad97-a1a376d821cd"). InnerVolumeSpecName "kube-api-access-8kzz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.553332 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f8fdf4-3a93-4957-ad97-a1a376d821cd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "15f8fdf4-3a93-4957-ad97-a1a376d821cd" (UID: "15f8fdf4-3a93-4957-ad97-a1a376d821cd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.553444 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdb06213-4bce-43d5-b16f-0bc09dc118fe-kube-api-access-dtsh2" (OuterVolumeSpecName: "kube-api-access-dtsh2") pod "bdb06213-4bce-43d5-b16f-0bc09dc118fe" (UID: "bdb06213-4bce-43d5-b16f-0bc09dc118fe"). 
InnerVolumeSpecName "kube-api-access-dtsh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:30:48 crc kubenswrapper[4728]: E0204 11:30:48.553581 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 04 11:30:48 crc kubenswrapper[4728]: E0204 11:30:48.554004 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zng5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4r26n_openshift-marketplace(81d54708-f68a-4e0b-b8e4-699a15e89f03): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 11:30:48 crc kubenswrapper[4728]: E0204 11:30:48.556785 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4r26n" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.567198 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6"] Feb 04 11:30:48 crc kubenswrapper[4728]: E0204 11:30:48.567429 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce0449a-5b37-455c-a796-e7af45f796aa" containerName="pruner" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.567443 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce0449a-5b37-455c-a796-e7af45f796aa" containerName="pruner" Feb 04 11:30:48 crc kubenswrapper[4728]: E0204 11:30:48.567462 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8" containerName="pruner" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.567470 
4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8" containerName="pruner" Feb 04 11:30:48 crc kubenswrapper[4728]: E0204 11:30:48.567486 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb06213-4bce-43d5-b16f-0bc09dc118fe" containerName="route-controller-manager" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.567495 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb06213-4bce-43d5-b16f-0bc09dc118fe" containerName="route-controller-manager" Feb 04 11:30:48 crc kubenswrapper[4728]: E0204 11:30:48.567511 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f8fdf4-3a93-4957-ad97-a1a376d821cd" containerName="controller-manager" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.567519 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f8fdf4-3a93-4957-ad97-a1a376d821cd" containerName="controller-manager" Feb 04 11:30:48 crc kubenswrapper[4728]: E0204 11:30:48.567529 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d800f63b-2465-4553-aa78-99fff8f484bb" containerName="collect-profiles" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.567537 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d800f63b-2465-4553-aa78-99fff8f484bb" containerName="collect-profiles" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.567669 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce0449a-5b37-455c-a796-e7af45f796aa" containerName="pruner" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.567682 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdb06213-4bce-43d5-b16f-0bc09dc118fe" containerName="route-controller-manager" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.567697 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d800f63b-2465-4553-aa78-99fff8f484bb" containerName="collect-profiles" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.567709 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f8fdf4-3a93-4957-ad97-a1a376d821cd" containerName="controller-manager" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.567721 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e46cba7c-1bbe-4772-81c4-c5e88cd0a8f8" containerName="pruner" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.568426 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.571615 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6"] Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637086 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-client-ca\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637137 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-config\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637172 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa5fcb4-4586-420d-a621-076ae4c7cf86-serving-cert\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637194 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw72n\" (UniqueName: \"kubernetes.io/projected/efa5fcb4-4586-420d-a621-076ae4c7cf86-kube-api-access-lw72n\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637542 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kzz5\" (UniqueName: \"kubernetes.io/projected/15f8fdf4-3a93-4957-ad97-a1a376d821cd-kube-api-access-8kzz5\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637579 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637594 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb06213-4bce-43d5-b16f-0bc09dc118fe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637605 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637617 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtsh2\" (UniqueName: \"kubernetes.io/projected/bdb06213-4bce-43d5-b16f-0bc09dc118fe-kube-api-access-dtsh2\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637629 4728 reconciler_common.go:293] "Volume 
detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdb06213-4bce-43d5-b16f-0bc09dc118fe-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637639 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637650 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f8fdf4-3a93-4957-ad97-a1a376d821cd-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.637659 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8fdf4-3a93-4957-ad97-a1a376d821cd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.738631 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-client-ca\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.738674 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-config\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.738696 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa5fcb4-4586-420d-a621-076ae4c7cf86-serving-cert\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.738712 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw72n\" (UniqueName: \"kubernetes.io/projected/efa5fcb4-4586-420d-a621-076ae4c7cf86-kube-api-access-lw72n\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.739561 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-client-ca\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.740523 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-config\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 
11:30:48.746390 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa5fcb4-4586-420d-a621-076ae4c7cf86-serving-cert\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.760430 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw72n\" (UniqueName: \"kubernetes.io/projected/efa5fcb4-4586-420d-a621-076ae4c7cf86-kube-api-access-lw72n\") pod \"route-controller-manager-77c7c86cd-xjxp6\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:48 crc kubenswrapper[4728]: I0204 11:30:48.900293 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.111892 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" event={"ID":"bdb06213-4bce-43d5-b16f-0bc09dc118fe","Type":"ContainerDied","Data":"12e1f50fa68087570d08450aaca0398799e0f5f063bb4b644ee33940017cf2c7"} Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.112141 4728 scope.go:117] "RemoveContainer" containerID="c6d26280ed7ab5931cbd67528bf6b0c7ddc3341dd56954401e1e0f10e947d967" Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.112208 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf" Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.114802 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" event={"ID":"15f8fdf4-3a93-4957-ad97-a1a376d821cd","Type":"ContainerDied","Data":"37d0b7ba7993499c98a6f1755994f5d5226496ed58df14b399313c2c99b58173"} Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.115017 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tlnkw" Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.159708 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tlnkw"] Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.162631 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tlnkw"] Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.168428 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf"] Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.170976 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ww7sf"] Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.560419 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15f8fdf4-3a93-4957-ad97-a1a376d821cd" path="/var/lib/kubelet/pods/15f8fdf4-3a93-4957-ad97-a1a376d821cd/volumes" Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.561605 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdb06213-4bce-43d5-b16f-0bc09dc118fe" path="/var/lib/kubelet/pods/bdb06213-4bce-43d5-b16f-0bc09dc118fe/volumes" Feb 04 11:30:49 crc kubenswrapper[4728]: E0204 11:30:49.938924 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4r26n" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" Feb 04 11:30:49 crc kubenswrapper[4728]: I0204 11:30:49.956437 4728 scope.go:117] "RemoveContainer" containerID="6c26de13a8ecee907c50d456b9b05e24b59f1cea21560659c460185391e4b0a4" Feb 04 11:30:50 crc kubenswrapper[4728]: E0204 11:30:50.026646 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 04 11:30:50 crc kubenswrapper[4728]: E0204 11:30:50.026828 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csmts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-w94hf_openshift-marketplace(b13c4294-fd84-478b-b4a0-321a5d706499): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 11:30:50 crc kubenswrapper[4728]: E0204 11:30:50.028066 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-w94hf" podUID="b13c4294-fd84-478b-b4a0-321a5d706499" Feb 04 11:30:50 crc kubenswrapper[4728]: E0204 11:30:50.052125 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 04 11:30:50 crc kubenswrapper[4728]: E0204 11:30:50.052418 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6twpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vrmfp_openshift-marketplace(eef08543-a746-4aad-a4be-5ee0bb7464a8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 04 11:30:50 crc kubenswrapper[4728]: E0204 11:30:50.053527 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vrmfp" podUID="eef08543-a746-4aad-a4be-5ee0bb7464a8" Feb 04 11:30:50 crc kubenswrapper[4728]: E0204 11:30:50.136425 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-w94hf" podUID="b13c4294-fd84-478b-b4a0-321a5d706499" Feb 04 11:30:50 crc kubenswrapper[4728]: E0204 11:30:50.137437 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vrmfp" podUID="eef08543-a746-4aad-a4be-5ee0bb7464a8" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.187239 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6"] Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.913158 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65c95f485d-bgxch"] Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.914495 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.916841 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.917068 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.917199 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.917711 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.919171 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.927372 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65c95f485d-bgxch"] Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.939052 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.943557 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.964616 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-proxy-ca-bundles\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.964697 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-client-ca\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.964739 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs65k\" (UniqueName: \"kubernetes.io/projected/3af42db6-6320-4a09-81c8-8c4af6149632-kube-api-access-fs65k\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.964823 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-config\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:50 crc kubenswrapper[4728]: I0204 11:30:50.964853 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3af42db6-6320-4a09-81c8-8c4af6149632-serving-cert\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.065539 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-proxy-ca-bundles\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.065608 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-client-ca\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.065634 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs65k\" (UniqueName: \"kubernetes.io/projected/3af42db6-6320-4a09-81c8-8c4af6149632-kube-api-access-fs65k\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.065677 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-config\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.065698 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3af42db6-6320-4a09-81c8-8c4af6149632-serving-cert\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.066909 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-proxy-ca-bundles\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.066986 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-client-ca\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.068036 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-config\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " 
pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.074485 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3af42db6-6320-4a09-81c8-8c4af6149632-serving-cert\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.088843 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs65k\" (UniqueName: \"kubernetes.io/projected/3af42db6-6320-4a09-81c8-8c4af6149632-kube-api-access-fs65k\") pod \"controller-manager-65c95f485d-bgxch\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.140939 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" event={"ID":"efa5fcb4-4586-420d-a621-076ae4c7cf86","Type":"ContainerStarted","Data":"8aea87d06b1ed666b242ee4e0e03dbd2676587dc95ed71a9994621aba0014a8e"} Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.141018 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" event={"ID":"efa5fcb4-4586-420d-a621-076ae4c7cf86","Type":"ContainerStarted","Data":"d1a34ca5aba0dbed8748d900a6aa46257458ed7c5d4ff65058d3ac6d5071f77f"} Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.141429 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.142494 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" event={"ID":"8fd2519d-be03-457c-b9d6-70862115f6a9","Type":"ContainerStarted","Data":"7a89b5cbd2efa4e82af3b133dada238bd7033ab5f9da2366b809b86986a888f2"} Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.142518 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q6m9t" event={"ID":"8fd2519d-be03-457c-b9d6-70862115f6a9","Type":"ContainerStarted","Data":"f40b378cc4b4553faf189b608c04d5b9b7b0c7a47f7743716ecd260898c0c163"} Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.144474 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9478" event={"ID":"f7fac84c-a087-48c4-8545-c4eef1dc364b","Type":"ContainerStarted","Data":"845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8"} Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.146703 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.160678 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" podStartSLOduration=5.160655444 podStartE2EDuration="5.160655444s" podCreationTimestamp="2026-02-04 11:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:51.158302171 +0000 UTC m=+200.301006566" 
watchObservedRunningTime="2026-02-04 11:30:51.160655444 +0000 UTC m=+200.303359829" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.182460 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-q6m9t" podStartSLOduration=174.182437975 podStartE2EDuration="2m54.182437975s" podCreationTimestamp="2026-02-04 11:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:51.179067481 +0000 UTC m=+200.321771886" watchObservedRunningTime="2026-02-04 11:30:51.182437975 +0000 UTC m=+200.325142360" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.240976 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.471152 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65c95f485d-bgxch"] Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.617811 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.619168 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.619288 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.622017 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.622170 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.671722 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/221f6e51-511a-4e58-a314-69a70aa4dfa9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"221f6e51-511a-4e58-a314-69a70aa4dfa9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.671822 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/221f6e51-511a-4e58-a314-69a70aa4dfa9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"221f6e51-511a-4e58-a314-69a70aa4dfa9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.773192 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/221f6e51-511a-4e58-a314-69a70aa4dfa9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"221f6e51-511a-4e58-a314-69a70aa4dfa9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.773278 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/221f6e51-511a-4e58-a314-69a70aa4dfa9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"221f6e51-511a-4e58-a314-69a70aa4dfa9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" 
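[editor's note] The ErrImagePull / ImagePullBackOff entries above all come from init containers named "extract-content" in openshift-marketplace catalog pods. A minimal sketch of how one could enumerate those waiting containers from outside the node, assuming kubeconfig access to this cluster and the official `kubernetes` Python client (an assumption; this is not part of the log or of kubelet itself):

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (assumes cluster access).
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Walk the pods in the namespace that dominates the failures above and
    # print any init container stuck waiting on an image pull.
    for pod in v1.list_namespaced_pod("openshift-marketplace").items:
        for status in pod.status.init_container_statuses or []:
            waiting = status.state.waiting if status.state else None
            if waiting and waiting.reason in ("ErrImagePull", "ImagePullBackOff"):
                print(pod.metadata.name, status.name, waiting.reason, waiting.message)

Run against the state captured here, this would be expected to list community-operators-g9ct6, community-operators-sszrr, redhat-marketplace-bgxgn, redhat-marketplace-7djd8, redhat-operators-4r26n, certified-operators-w94hf, and certified-operators-vrmfp, matching the pod_workers.go:1301 entries.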
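[editor's note] The pod_startup_latency_tracker entries above (and the redhat-operators-z9478 one a few lines below) relate their fields by simple timestamp arithmetic: podStartE2EDuration is observed-running time minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling). A back-of-the-envelope check using the z9478 numbers, truncated to microseconds (this is a verification sketch, not the tracker's actual implementation):

    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f %z"
    # Timestamps copied from the redhat-operators-z9478 entry below.
    created   = datetime.strptime("2026-02-04 11:30:13.000000 +0000", FMT)
    pull_from = datetime.strptime("2026-02-04 11:30:14.814821 +0000", FMT)
    pull_to   = datetime.strptime("2026-02-04 11:30:52.625940 +0000", FMT)
    running   = datetime.strptime("2026-02-04 11:30:53.199663 +0000", FMT)

    e2e = (running - created).total_seconds()              # ~40.199663 s
    slo = e2e - (pull_to - pull_from).total_seconds()      # ~2.388544 s
    print(f"podStartE2EDuration ~ {e2e:.6f}s, podStartSLOduration ~ {slo:.6f}s")

This reproduces podStartE2EDuration="40.199663498s" and podStartSLOduration=2.388544498 to microsecond precision; pods whose pull timestamps are the zero value ("0001-01-01 00:00:00 +0000 UTC"), like route-controller-manager-77c7c86cd-xjxp6 above, have SLO equal to E2E.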
Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.773394 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/221f6e51-511a-4e58-a314-69a70aa4dfa9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"221f6e51-511a-4e58-a314-69a70aa4dfa9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.800283 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/221f6e51-511a-4e58-a314-69a70aa4dfa9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"221f6e51-511a-4e58-a314-69a70aa4dfa9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 11:30:51 crc kubenswrapper[4728]: I0204 11:30:51.936720 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 11:30:52 crc kubenswrapper[4728]: I0204 11:30:52.150347 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" event={"ID":"3af42db6-6320-4a09-81c8-8c4af6149632","Type":"ContainerStarted","Data":"be2ed7549686f6e8f37494e214df16d104ab12db64812bb02b872039595575d9"} Feb 04 11:30:52 crc kubenswrapper[4728]: I0204 11:30:52.150648 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" event={"ID":"3af42db6-6320-4a09-81c8-8c4af6149632","Type":"ContainerStarted","Data":"df06e9b21160bc79d0dd967999a44d4718418dc992a93104eb40959f27b829d2"} Feb 04 11:30:52 crc kubenswrapper[4728]: I0204 11:30:52.150991 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:52 crc kubenswrapper[4728]: I0204 11:30:52.153217 4728 generic.go:334] "Generic (PLEG): container finished" podID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerID="845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8" exitCode=0 Feb 04 11:30:52 crc kubenswrapper[4728]: I0204 11:30:52.154017 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9478" event={"ID":"f7fac84c-a087-48c4-8545-c4eef1dc364b","Type":"ContainerDied","Data":"845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8"} Feb 04 11:30:52 crc kubenswrapper[4728]: I0204 11:30:52.156482 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:30:52 crc kubenswrapper[4728]: I0204 11:30:52.168565 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" podStartSLOduration=7.168543478 podStartE2EDuration="7.168543478s" podCreationTimestamp="2026-02-04 11:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:52.165210525 +0000 UTC m=+201.307914910" watchObservedRunningTime="2026-02-04 11:30:52.168543478 +0000 UTC m=+201.311247863" Feb 04 11:30:52 crc kubenswrapper[4728]: I0204 11:30:52.350833 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 04 11:30:53 crc kubenswrapper[4728]: I0204 11:30:53.162059 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"221f6e51-511a-4e58-a314-69a70aa4dfa9","Type":"ContainerStarted","Data":"31d3ff49cc19d4cf5762ccdb7f76d2058b5c09c8bb021fd1346d5bc215125dca"} Feb 04 11:30:53 crc kubenswrapper[4728]: I0204 11:30:53.162388 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"221f6e51-511a-4e58-a314-69a70aa4dfa9","Type":"ContainerStarted","Data":"0ec4e53141fc7d3d3c17debdeb977949355492480ce3264aa2a74042934c40cd"} Feb 04 11:30:53 crc kubenswrapper[4728]: I0204 11:30:53.164677 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9478" event={"ID":"f7fac84c-a087-48c4-8545-c4eef1dc364b","Type":"ContainerStarted","Data":"f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1"} Feb 04 11:30:53 crc kubenswrapper[4728]: I0204 11:30:53.199684 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z9478" podStartSLOduration=2.388544498 podStartE2EDuration="40.199663498s" podCreationTimestamp="2026-02-04 11:30:13 +0000 UTC" firstStartedPulling="2026-02-04 11:30:14.814821041 +0000 UTC m=+163.957525426" lastFinishedPulling="2026-02-04 11:30:52.625940041 +0000 UTC m=+201.768644426" observedRunningTime="2026-02-04 11:30:53.199321438 +0000 UTC m=+202.342025823" watchObservedRunningTime="2026-02-04 11:30:53.199663498 +0000 UTC m=+202.342367873" Feb 04 11:30:53 crc kubenswrapper[4728]: I0204 11:30:53.200164 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.200157123 podStartE2EDuration="2.200157123s" podCreationTimestamp="2026-02-04 11:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:53.179354702 +0000 UTC m=+202.322059087" watchObservedRunningTime="2026-02-04 11:30:53.200157123 +0000 UTC m=+202.342861508" Feb 04 11:30:53 crc kubenswrapper[4728]: I0204 11:30:53.813193 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:53 crc kubenswrapper[4728]: I0204 11:30:53.813237 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:30:54 crc kubenswrapper[4728]: I0204 11:30:54.171447 4728 generic.go:334] "Generic (PLEG): container finished" podID="221f6e51-511a-4e58-a314-69a70aa4dfa9" containerID="31d3ff49cc19d4cf5762ccdb7f76d2058b5c09c8bb021fd1346d5bc215125dca" exitCode=0 Feb 04 11:30:54 crc kubenswrapper[4728]: I0204 11:30:54.171555 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"221f6e51-511a-4e58-a314-69a70aa4dfa9","Type":"ContainerDied","Data":"31d3ff49cc19d4cf5762ccdb7f76d2058b5c09c8bb021fd1346d5bc215125dca"} Feb 04 11:30:54 crc kubenswrapper[4728]: I0204 11:30:54.940819 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9478" podUID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerName="registry-server" probeResult="failure" output=< Feb 04 11:30:54 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 04 11:30:54 crc kubenswrapper[4728]: > Feb 04 11:30:55 crc kubenswrapper[4728]: I0204 11:30:55.448367 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 11:30:55 crc kubenswrapper[4728]: I0204 11:30:55.534660 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/221f6e51-511a-4e58-a314-69a70aa4dfa9-kubelet-dir\") pod \"221f6e51-511a-4e58-a314-69a70aa4dfa9\" (UID: \"221f6e51-511a-4e58-a314-69a70aa4dfa9\") " Feb 04 11:30:55 crc kubenswrapper[4728]: I0204 11:30:55.534832 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/221f6e51-511a-4e58-a314-69a70aa4dfa9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "221f6e51-511a-4e58-a314-69a70aa4dfa9" (UID: "221f6e51-511a-4e58-a314-69a70aa4dfa9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:30:55 crc kubenswrapper[4728]: I0204 11:30:55.534892 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/221f6e51-511a-4e58-a314-69a70aa4dfa9-kube-api-access\") pod \"221f6e51-511a-4e58-a314-69a70aa4dfa9\" (UID: \"221f6e51-511a-4e58-a314-69a70aa4dfa9\") " Feb 04 11:30:55 crc kubenswrapper[4728]: I0204 11:30:55.535222 4728 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/221f6e51-511a-4e58-a314-69a70aa4dfa9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:55 crc kubenswrapper[4728]: I0204 11:30:55.541552 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/221f6e51-511a-4e58-a314-69a70aa4dfa9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "221f6e51-511a-4e58-a314-69a70aa4dfa9" (UID: "221f6e51-511a-4e58-a314-69a70aa4dfa9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:30:55 crc kubenswrapper[4728]: I0204 11:30:55.636319 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/221f6e51-511a-4e58-a314-69a70aa4dfa9-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.195463 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"221f6e51-511a-4e58-a314-69a70aa4dfa9","Type":"ContainerDied","Data":"0ec4e53141fc7d3d3c17debdeb977949355492480ce3264aa2a74042934c40cd"} Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.195497 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.195506 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec4e53141fc7d3d3c17debdeb977949355492480ce3264aa2a74042934c40cd" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.401880 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 04 11:30:56 crc kubenswrapper[4728]: E0204 11:30:56.402171 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221f6e51-511a-4e58-a314-69a70aa4dfa9" containerName="pruner" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.402197 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="221f6e51-511a-4e58-a314-69a70aa4dfa9" containerName="pruner" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.402360 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="221f6e51-511a-4e58-a314-69a70aa4dfa9" containerName="pruner" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.402838 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.408387 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.408457 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.415335 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.445672 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.445779 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84324b99-575b-4d1e-963a-4ce98447b52b-kube-api-access\") pod \"installer-9-crc\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.445811 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-var-lock\") pod \"installer-9-crc\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.547083 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84324b99-575b-4d1e-963a-4ce98447b52b-kube-api-access\") pod \"installer-9-crc\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.547134 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-var-lock\") pod 
\"installer-9-crc\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.547209 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.547308 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.547379 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-var-lock\") pod \"installer-9-crc\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.574709 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84324b99-575b-4d1e-963a-4ce98447b52b-kube-api-access\") pod \"installer-9-crc\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:30:56 crc kubenswrapper[4728]: I0204 11:30:56.744734 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:30:57 crc kubenswrapper[4728]: I0204 11:30:57.133723 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 04 11:30:57 crc kubenswrapper[4728]: I0204 11:30:57.208861 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84324b99-575b-4d1e-963a-4ce98447b52b","Type":"ContainerStarted","Data":"b0c5aa1899265e7189527a487a9ea770f1689c77616e132a5a71ea640cca61c3"} Feb 04 11:30:58 crc kubenswrapper[4728]: I0204 11:30:58.214989 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9ct6" event={"ID":"c17fa247-ec01-449d-9888-ab485b1496a6","Type":"ContainerStarted","Data":"db4fae948f050049f75cb1187649d48dc7341d141106d3c0c8e9f31830dc713f"} Feb 04 11:30:58 crc kubenswrapper[4728]: I0204 11:30:58.216513 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84324b99-575b-4d1e-963a-4ce98447b52b","Type":"ContainerStarted","Data":"6ebedb15b6f29276904a9653732b9ca52d959836c780895805fc26b1b11019f9"} Feb 04 11:30:58 crc kubenswrapper[4728]: I0204 11:30:58.251371 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.251341067 podStartE2EDuration="2.251341067s" podCreationTimestamp="2026-02-04 11:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:30:58.249964144 +0000 UTC m=+207.392668529" watchObservedRunningTime="2026-02-04 11:30:58.251341067 +0000 UTC m=+207.394045452" Feb 04 11:30:59 crc kubenswrapper[4728]: I0204 11:30:59.223474 4728 generic.go:334] 
"Generic (PLEG): container finished" podID="c17fa247-ec01-449d-9888-ab485b1496a6" containerID="db4fae948f050049f75cb1187649d48dc7341d141106d3c0c8e9f31830dc713f" exitCode=0 Feb 04 11:30:59 crc kubenswrapper[4728]: I0204 11:30:59.223543 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9ct6" event={"ID":"c17fa247-ec01-449d-9888-ab485b1496a6","Type":"ContainerDied","Data":"db4fae948f050049f75cb1187649d48dc7341d141106d3c0c8e9f31830dc713f"} Feb 04 11:31:00 crc kubenswrapper[4728]: I0204 11:31:00.232973 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9ct6" event={"ID":"c17fa247-ec01-449d-9888-ab485b1496a6","Type":"ContainerStarted","Data":"982bed018f143b767f9419cf57903de9929d1244f891181df3a1a3c901801901"} Feb 04 11:31:00 crc kubenswrapper[4728]: I0204 11:31:00.258306 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g9ct6" podStartSLOduration=1.999497604 podStartE2EDuration="50.258284603s" podCreationTimestamp="2026-02-04 11:30:10 +0000 UTC" firstStartedPulling="2026-02-04 11:30:11.606406872 +0000 UTC m=+160.749111257" lastFinishedPulling="2026-02-04 11:30:59.865193871 +0000 UTC m=+209.007898256" observedRunningTime="2026-02-04 11:31:00.252562047 +0000 UTC m=+209.395266452" watchObservedRunningTime="2026-02-04 11:31:00.258284603 +0000 UTC m=+209.400988998" Feb 04 11:31:00 crc kubenswrapper[4728]: I0204 11:31:00.559685 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g9ct6" Feb 04 11:31:00 crc kubenswrapper[4728]: I0204 11:31:00.559737 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g9ct6" Feb 04 11:31:01 crc kubenswrapper[4728]: I0204 11:31:01.239929 4728 generic.go:334] "Generic (PLEG): container finished" podID="68c43db7-d07e-45eb-bd58-6651d8a0e342" containerID="e5a31f756d4fe49f9dbf0dd9185be0b1c8b50edf7fac91010bd0127e04183999" exitCode=0 Feb 04 11:31:01 crc kubenswrapper[4728]: I0204 11:31:01.240004 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sszrr" event={"ID":"68c43db7-d07e-45eb-bd58-6651d8a0e342","Type":"ContainerDied","Data":"e5a31f756d4fe49f9dbf0dd9185be0b1c8b50edf7fac91010bd0127e04183999"} Feb 04 11:31:01 crc kubenswrapper[4728]: I0204 11:31:01.367583 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cmjx5"] Feb 04 11:31:01 crc kubenswrapper[4728]: I0204 11:31:01.606011 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-g9ct6" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" containerName="registry-server" probeResult="failure" output=< Feb 04 11:31:01 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 04 11:31:01 crc kubenswrapper[4728]: > Feb 04 11:31:03 crc kubenswrapper[4728]: E0204 11:31:03.128775 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf9c8d19_58ae_479c_8c47_3ce89d9c803c.slice/crio-ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf9c8d19_58ae_479c_8c47_3ce89d9c803c.slice/crio-conmon-ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce.scope\": RecentStats: unable to find data in memory cache]" Feb 04 11:31:03 crc kubenswrapper[4728]: I0204 11:31:03.257009 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sszrr" event={"ID":"68c43db7-d07e-45eb-bd58-6651d8a0e342","Type":"ContainerStarted","Data":"7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba"} Feb 04 11:31:03 crc kubenswrapper[4728]: I0204 11:31:03.262129 4728 generic.go:334] "Generic (PLEG): container finished" podID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" containerID="ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce" exitCode=0 Feb 04 11:31:03 crc kubenswrapper[4728]: I0204 11:31:03.262178 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7djd8" event={"ID":"af9c8d19-58ae-479c-8c47-3ce89d9c803c","Type":"ContainerDied","Data":"ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce"} Feb 04 11:31:03 crc kubenswrapper[4728]: I0204 11:31:03.297370 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sszrr" podStartSLOduration=3.405176311 podStartE2EDuration="54.297354911s" podCreationTimestamp="2026-02-04 11:30:09 +0000 UTC" firstStartedPulling="2026-02-04 11:30:11.602616602 +0000 UTC m=+160.745320987" lastFinishedPulling="2026-02-04 11:31:02.494795202 +0000 UTC m=+211.637499587" observedRunningTime="2026-02-04 11:31:03.273872627 +0000 UTC m=+212.416577012" watchObservedRunningTime="2026-02-04 11:31:03.297354911 +0000 UTC m=+212.440059296" Feb 04 11:31:03 crc kubenswrapper[4728]: I0204 11:31:03.891172 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:31:03 crc kubenswrapper[4728]: I0204 11:31:03.938404 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:31:04 crc kubenswrapper[4728]: I0204 11:31:04.268137 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7djd8" event={"ID":"af9c8d19-58ae-479c-8c47-3ce89d9c803c","Type":"ContainerStarted","Data":"ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e"} Feb 04 11:31:04 crc kubenswrapper[4728]: I0204 11:31:04.270523 4728 generic.go:334] "Generic (PLEG): container finished" podID="eef08543-a746-4aad-a4be-5ee0bb7464a8" containerID="806a2e250635702fb48d72288289d7eb14ddbfad6f9470121aa04826bb8c67eb" exitCode=0 Feb 04 11:31:04 crc kubenswrapper[4728]: I0204 11:31:04.270578 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrmfp" event={"ID":"eef08543-a746-4aad-a4be-5ee0bb7464a8","Type":"ContainerDied","Data":"806a2e250635702fb48d72288289d7eb14ddbfad6f9470121aa04826bb8c67eb"} Feb 04 11:31:04 crc kubenswrapper[4728]: I0204 11:31:04.274196 4728 generic.go:334] "Generic (PLEG): container finished" podID="b13c4294-fd84-478b-b4a0-321a5d706499" containerID="f14fc433c2b613974479b8d38955b06dc7fa88542c80a5a8d3d72bcdb556721e" exitCode=0 Feb 04 11:31:04 crc kubenswrapper[4728]: I0204 11:31:04.274633 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w94hf" 
event={"ID":"b13c4294-fd84-478b-b4a0-321a5d706499","Type":"ContainerDied","Data":"f14fc433c2b613974479b8d38955b06dc7fa88542c80a5a8d3d72bcdb556721e"} Feb 04 11:31:04 crc kubenswrapper[4728]: I0204 11:31:04.296779 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7djd8" podStartSLOduration=3.07175205 podStartE2EDuration="53.296730142s" podCreationTimestamp="2026-02-04 11:30:11 +0000 UTC" firstStartedPulling="2026-02-04 11:30:13.723246413 +0000 UTC m=+162.865950798" lastFinishedPulling="2026-02-04 11:31:03.948224505 +0000 UTC m=+213.090928890" observedRunningTime="2026-02-04 11:31:04.292809702 +0000 UTC m=+213.435514097" watchObservedRunningTime="2026-02-04 11:31:04.296730142 +0000 UTC m=+213.439434537" Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.280805 4728 generic.go:334] "Generic (PLEG): container finished" podID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" containerID="d27816492f6aabeeb239f78ce43f49e69078c5ee7befbfb15fb7879f0e46ff53" exitCode=0 Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.280883 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgxgn" event={"ID":"8c5be741-eb53-486a-8af4-1e0b4974ddb7","Type":"ContainerDied","Data":"d27816492f6aabeeb239f78ce43f49e69078c5ee7befbfb15fb7879f0e46ff53"} Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.283971 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w94hf" event={"ID":"b13c4294-fd84-478b-b4a0-321a5d706499","Type":"ContainerStarted","Data":"0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2"} Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.285443 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r26n" event={"ID":"81d54708-f68a-4e0b-b8e4-699a15e89f03","Type":"ContainerStarted","Data":"4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348"} Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.288259 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrmfp" event={"ID":"eef08543-a746-4aad-a4be-5ee0bb7464a8","Type":"ContainerStarted","Data":"a8e0642ee3710f1469f1046a1f1f6360a72620eb8813877239e5df09a29f9fb6"} Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.349842 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w94hf" podStartSLOduration=3.272699999 podStartE2EDuration="56.34982545s" podCreationTimestamp="2026-02-04 11:30:09 +0000 UTC" firstStartedPulling="2026-02-04 11:30:11.614026964 +0000 UTC m=+160.756731349" lastFinishedPulling="2026-02-04 11:31:04.691152405 +0000 UTC m=+213.833856800" observedRunningTime="2026-02-04 11:31:05.327164531 +0000 UTC m=+214.469868916" watchObservedRunningTime="2026-02-04 11:31:05.34982545 +0000 UTC m=+214.492529835" Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.363196 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vrmfp" podStartSLOduration=2.108080502 podStartE2EDuration="55.363178431s" podCreationTimestamp="2026-02-04 11:30:10 +0000 UTC" firstStartedPulling="2026-02-04 11:30:11.612948136 +0000 UTC m=+160.755652521" lastFinishedPulling="2026-02-04 11:31:04.868046065 +0000 UTC m=+214.010750450" observedRunningTime="2026-02-04 11:31:05.360468617 +0000 UTC m=+214.503173012" watchObservedRunningTime="2026-02-04 
Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.363196 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vrmfp" podStartSLOduration=2.108080502 podStartE2EDuration="55.363178431s" podCreationTimestamp="2026-02-04 11:30:10 +0000 UTC" firstStartedPulling="2026-02-04 11:30:11.612948136 +0000 UTC m=+160.755652521" lastFinishedPulling="2026-02-04 11:31:04.868046065 +0000 UTC m=+214.010750450" observedRunningTime="2026-02-04 11:31:05.360468617 +0000 UTC m=+214.503173012" watchObservedRunningTime="2026-02-04 11:31:05.363178431 +0000 UTC m=+214.505882836"
Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.448028 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.448090 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.448134 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.448716 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.448834 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4" gracePeriod=600
Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.958190 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65c95f485d-bgxch"]
Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.958390 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" podUID="3af42db6-6320-4a09-81c8-8c4af6149632" containerName="controller-manager" containerID="cri-o://be2ed7549686f6e8f37494e214df16d104ab12db64812bb02b872039595575d9" gracePeriod=30
Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.987709 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6"]
Feb 04 11:31:05 crc kubenswrapper[4728]: I0204 11:31:05.988014 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" podUID="efa5fcb4-4586-420d-a621-076ae4c7cf86" containerName="route-controller-manager" containerID="cri-o://8aea87d06b1ed666b242ee4e0e03dbd2676587dc95ed71a9994621aba0014a8e" gracePeriod=30
Feb 04 11:31:06 crc kubenswrapper[4728]: I0204 11:31:06.294416 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgxgn" event={"ID":"8c5be741-eb53-486a-8af4-1e0b4974ddb7","Type":"ContainerStarted","Data":"07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b"}
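[Editor's note] The machine-config-daemon episode above shows the liveness path end to end: an HTTP GET against http://127.0.0.1:8798/health is refused, the probe is marked unhealthy, and the container is killed with its termination grace period (600s here) so it can be restarted. A minimal Go sketch of such an HTTP health check; the endpoint comes from the log, the 1s timeout and the threshold logic are illustrative assumptions:

    package main

    import (
            "fmt"
            "net/http"
            "time"
    )

    // healthy performs one HTTP liveness check: any transport error (such as
    // "connect: connection refused") or a non-2xx status counts as unhealthy.
    func healthy(url string, timeout time.Duration) bool {
            client := http.Client{Timeout: timeout}
            resp, err := client.Get(url)
            if err != nil {
                    fmt.Printf("Probe failed: %v\n", err)
                    return false
            }
            defer resp.Body.Close()
            return resp.StatusCode >= 200 && resp.StatusCode < 300
    }

    func main() {
            if !healthy("http://127.0.0.1:8798/health", 1*time.Second) {
                    fmt.Println("Container machine-config-daemon failed liveness probe, will be restarted")
            }
    }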
containerID="8aea87d06b1ed666b242ee4e0e03dbd2676587dc95ed71a9994621aba0014a8e" exitCode=0 Feb 04 11:31:06 crc kubenswrapper[4728]: I0204 11:31:06.296090 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" event={"ID":"efa5fcb4-4586-420d-a621-076ae4c7cf86","Type":"ContainerDied","Data":"8aea87d06b1ed666b242ee4e0e03dbd2676587dc95ed71a9994621aba0014a8e"} Feb 04 11:31:06 crc kubenswrapper[4728]: I0204 11:31:06.302252 4728 generic.go:334] "Generic (PLEG): container finished" podID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerID="4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348" exitCode=0 Feb 04 11:31:06 crc kubenswrapper[4728]: I0204 11:31:06.302337 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r26n" event={"ID":"81d54708-f68a-4e0b-b8e4-699a15e89f03","Type":"ContainerDied","Data":"4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348"} Feb 04 11:31:06 crc kubenswrapper[4728]: I0204 11:31:06.306149 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4" exitCode=0 Feb 04 11:31:06 crc kubenswrapper[4728]: I0204 11:31:06.306224 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4"} Feb 04 11:31:06 crc kubenswrapper[4728]: I0204 11:31:06.312819 4728 generic.go:334] "Generic (PLEG): container finished" podID="3af42db6-6320-4a09-81c8-8c4af6149632" containerID="be2ed7549686f6e8f37494e214df16d104ab12db64812bb02b872039595575d9" exitCode=0 Feb 04 11:31:06 crc kubenswrapper[4728]: I0204 11:31:06.312960 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" event={"ID":"3af42db6-6320-4a09-81c8-8c4af6149632","Type":"ContainerDied","Data":"be2ed7549686f6e8f37494e214df16d104ab12db64812bb02b872039595575d9"} Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.339481 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"70dd38e437063854e1d19f2cb326f62fcfbcc9c4a621e22232ef875b06d7434d"} Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.361719 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bgxgn" podStartSLOduration=3.177530878 podStartE2EDuration="55.361701118s" podCreationTimestamp="2026-02-04 11:30:12 +0000 UTC" firstStartedPulling="2026-02-04 11:30:13.738016714 +0000 UTC m=+162.880721099" lastFinishedPulling="2026-02-04 11:31:05.922186954 +0000 UTC m=+215.064891339" observedRunningTime="2026-02-04 11:31:07.357404555 +0000 UTC m=+216.500108950" watchObservedRunningTime="2026-02-04 11:31:07.361701118 +0000 UTC m=+216.504405503" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.532090 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.538215 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.573514 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n"] Feb 04 11:31:07 crc kubenswrapper[4728]: E0204 11:31:07.573966 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa5fcb4-4586-420d-a621-076ae4c7cf86" containerName="route-controller-manager" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.573983 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa5fcb4-4586-420d-a621-076ae4c7cf86" containerName="route-controller-manager" Feb 04 11:31:07 crc kubenswrapper[4728]: E0204 11:31:07.573992 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af42db6-6320-4a09-81c8-8c4af6149632" containerName="controller-manager" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.573999 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af42db6-6320-4a09-81c8-8c4af6149632" containerName="controller-manager" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.574294 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa5fcb4-4586-420d-a621-076ae4c7cf86" containerName="route-controller-manager" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.574309 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af42db6-6320-4a09-81c8-8c4af6149632" containerName="controller-manager" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.575819 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n"] Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.575984 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.603057 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-config\") pod \"3af42db6-6320-4a09-81c8-8c4af6149632\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.603152 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-proxy-ca-bundles\") pod \"3af42db6-6320-4a09-81c8-8c4af6149632\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.603187 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw72n\" (UniqueName: \"kubernetes.io/projected/efa5fcb4-4586-420d-a621-076ae4c7cf86-kube-api-access-lw72n\") pod \"efa5fcb4-4586-420d-a621-076ae4c7cf86\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.603223 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-config\") pod \"efa5fcb4-4586-420d-a621-076ae4c7cf86\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.603245 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs65k\" (UniqueName: \"kubernetes.io/projected/3af42db6-6320-4a09-81c8-8c4af6149632-kube-api-access-fs65k\") pod \"3af42db6-6320-4a09-81c8-8c4af6149632\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.603268 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-client-ca\") pod \"3af42db6-6320-4a09-81c8-8c4af6149632\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.603298 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa5fcb4-4586-420d-a621-076ae4c7cf86-serving-cert\") pod \"efa5fcb4-4586-420d-a621-076ae4c7cf86\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.603327 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-client-ca\") pod \"efa5fcb4-4586-420d-a621-076ae4c7cf86\" (UID: \"efa5fcb4-4586-420d-a621-076ae4c7cf86\") " Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.603356 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3af42db6-6320-4a09-81c8-8c4af6149632-serving-cert\") pod \"3af42db6-6320-4a09-81c8-8c4af6149632\" (UID: \"3af42db6-6320-4a09-81c8-8c4af6149632\") " Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.604506 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"3af42db6-6320-4a09-81c8-8c4af6149632" (UID: "3af42db6-6320-4a09-81c8-8c4af6149632"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.604542 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-client-ca" (OuterVolumeSpecName: "client-ca") pod "3af42db6-6320-4a09-81c8-8c4af6149632" (UID: "3af42db6-6320-4a09-81c8-8c4af6149632"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.605047 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-client-ca" (OuterVolumeSpecName: "client-ca") pod "efa5fcb4-4586-420d-a621-076ae4c7cf86" (UID: "efa5fcb4-4586-420d-a621-076ae4c7cf86"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.605208 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-config" (OuterVolumeSpecName: "config") pod "efa5fcb4-4586-420d-a621-076ae4c7cf86" (UID: "efa5fcb4-4586-420d-a621-076ae4c7cf86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.605363 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-config" (OuterVolumeSpecName: "config") pod "3af42db6-6320-4a09-81c8-8c4af6149632" (UID: "3af42db6-6320-4a09-81c8-8c4af6149632"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.610342 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa5fcb4-4586-420d-a621-076ae4c7cf86-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "efa5fcb4-4586-420d-a621-076ae4c7cf86" (UID: "efa5fcb4-4586-420d-a621-076ae4c7cf86"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.610396 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa5fcb4-4586-420d-a621-076ae4c7cf86-kube-api-access-lw72n" (OuterVolumeSpecName: "kube-api-access-lw72n") pod "efa5fcb4-4586-420d-a621-076ae4c7cf86" (UID: "efa5fcb4-4586-420d-a621-076ae4c7cf86"). InnerVolumeSpecName "kube-api-access-lw72n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.619904 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af42db6-6320-4a09-81c8-8c4af6149632-kube-api-access-fs65k" (OuterVolumeSpecName: "kube-api-access-fs65k") pod "3af42db6-6320-4a09-81c8-8c4af6149632" (UID: "3af42db6-6320-4a09-81c8-8c4af6149632"). InnerVolumeSpecName "kube-api-access-fs65k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.620023 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af42db6-6320-4a09-81c8-8c4af6149632-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3af42db6-6320-4a09-81c8-8c4af6149632" (UID: "3af42db6-6320-4a09-81c8-8c4af6149632"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.704914 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335910da-4350-46de-b33b-3169d304c7f9-serving-cert\") pod \"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705000 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9zft\" (UniqueName: \"kubernetes.io/projected/335910da-4350-46de-b33b-3169d304c7f9-kube-api-access-w9zft\") pod \"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705030 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-config\") pod \"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705175 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-client-ca\") pod \"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705237 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705252 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw72n\" (UniqueName: \"kubernetes.io/projected/efa5fcb4-4586-420d-a621-076ae4c7cf86-kube-api-access-lw72n\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705266 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705278 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs65k\" (UniqueName: \"kubernetes.io/projected/3af42db6-6320-4a09-81c8-8c4af6149632-kube-api-access-fs65k\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705291 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705303 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa5fcb4-4586-420d-a621-076ae4c7cf86-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705314 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efa5fcb4-4586-420d-a621-076ae4c7cf86-client-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705325 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3af42db6-6320-4a09-81c8-8c4af6149632-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.705335 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3af42db6-6320-4a09-81c8-8c4af6149632-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.806496 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-client-ca\") pod \"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.806558 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335910da-4350-46de-b33b-3169d304c7f9-serving-cert\") pod \"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.806594 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9zft\" (UniqueName: \"kubernetes.io/projected/335910da-4350-46de-b33b-3169d304c7f9-kube-api-access-w9zft\") pod \"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.806626 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-config\") pod \"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.807960 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-config\") pod \"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.808557 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-client-ca\") pod 
\"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.813475 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335910da-4350-46de-b33b-3169d304c7f9-serving-cert\") pod \"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.825428 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9zft\" (UniqueName: \"kubernetes.io/projected/335910da-4350-46de-b33b-3169d304c7f9-kube-api-access-w9zft\") pod \"route-controller-manager-7dd98b5c79-pvc7n\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") " pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.849348 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9478"] Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.849670 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z9478" podUID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerName="registry-server" containerID="cri-o://f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1" gracePeriod=2 Feb 04 11:31:07 crc kubenswrapper[4728]: I0204 11:31:07.892635 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.287099 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.341039 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n"] Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.354227 4728 generic.go:334] "Generic (PLEG): container finished" podID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerID="f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1" exitCode=0 Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.354292 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9478" event={"ID":"f7fac84c-a087-48c4-8545-c4eef1dc364b","Type":"ContainerDied","Data":"f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1"} Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.354318 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9478" event={"ID":"f7fac84c-a087-48c4-8545-c4eef1dc364b","Type":"ContainerDied","Data":"0f02ca77d601f8a2eec9f9ec31447123ec3f7f97b389cae95983aa8b1c57b2bd"} Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.354334 4728 scope.go:117] "RemoveContainer" containerID="f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.354445 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9478" Feb 04 11:31:08 crc kubenswrapper[4728]: W0204 11:31:08.357784 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod335910da_4350_46de_b33b_3169d304c7f9.slice/crio-0bf2c809f73adb08faa361c66a75d143a1204941120f986fd0f69273bc3c993f WatchSource:0}: Error finding container 0bf2c809f73adb08faa361c66a75d143a1204941120f986fd0f69273bc3c993f: Status 404 returned error can't find the container with id 0bf2c809f73adb08faa361c66a75d143a1204941120f986fd0f69273bc3c993f Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.358391 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" event={"ID":"3af42db6-6320-4a09-81c8-8c4af6149632","Type":"ContainerDied","Data":"df06e9b21160bc79d0dd967999a44d4718418dc992a93104eb40959f27b829d2"} Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.358491 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65c95f485d-bgxch" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.384463 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" event={"ID":"efa5fcb4-4586-420d-a621-076ae4c7cf86","Type":"ContainerDied","Data":"d1a34ca5aba0dbed8748d900a6aa46257458ed7c5d4ff65058d3ac6d5071f77f"} Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.384511 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.412131 4728 scope.go:117] "RemoveContainer" containerID="845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.416493 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-catalog-content\") pod \"f7fac84c-a087-48c4-8545-c4eef1dc364b\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.416628 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p57tf\" (UniqueName: \"kubernetes.io/projected/f7fac84c-a087-48c4-8545-c4eef1dc364b-kube-api-access-p57tf\") pod \"f7fac84c-a087-48c4-8545-c4eef1dc364b\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.416660 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-utilities\") pod \"f7fac84c-a087-48c4-8545-c4eef1dc364b\" (UID: \"f7fac84c-a087-48c4-8545-c4eef1dc364b\") " Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.418211 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-utilities" (OuterVolumeSpecName: "utilities") pod "f7fac84c-a087-48c4-8545-c4eef1dc364b" (UID: "f7fac84c-a087-48c4-8545-c4eef1dc364b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.433606 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65c95f485d-bgxch"] Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.437575 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65c95f485d-bgxch"] Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.441559 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fac84c-a087-48c4-8545-c4eef1dc364b-kube-api-access-p57tf" (OuterVolumeSpecName: "kube-api-access-p57tf") pod "f7fac84c-a087-48c4-8545-c4eef1dc364b" (UID: "f7fac84c-a087-48c4-8545-c4eef1dc364b"). InnerVolumeSpecName "kube-api-access-p57tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.445400 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6"] Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.451694 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77c7c86cd-xjxp6"] Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.458548 4728 scope.go:117] "RemoveContainer" containerID="316b103e2e93171e2c4891d1155199ff6e035a64de40327d133872e2df9457b3" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.516860 4728 scope.go:117] "RemoveContainer" containerID="f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1" Feb 04 11:31:08 crc kubenswrapper[4728]: E0204 11:31:08.518163 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1\": container with ID starting with f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1 not found: ID does not exist" containerID="f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.518209 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1"} err="failed to get container status \"f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1\": rpc error: code = NotFound desc = could not find container \"f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1\": container with ID starting with f1e5318a46512fcff895a1c8f264ca3b905509f41f0a4c083cea1359d16ce8f1 not found: ID does not exist" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.518239 4728 scope.go:117] "RemoveContainer" containerID="845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8" Feb 04 11:31:08 crc kubenswrapper[4728]: E0204 11:31:08.518489 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8\": container with ID starting with 845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8 not found: ID does not exist" containerID="845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.518514 4728 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8"} err="failed to get container status \"845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8\": rpc error: code = NotFound desc = could not find container \"845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8\": container with ID starting with 845f008b79f1fcbfaad6ed1b413cba3f81121c07403f07f20c1c8b3b591e9eb8 not found: ID does not exist" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.518527 4728 scope.go:117] "RemoveContainer" containerID="316b103e2e93171e2c4891d1155199ff6e035a64de40327d133872e2df9457b3" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.518668 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p57tf\" (UniqueName: \"kubernetes.io/projected/f7fac84c-a087-48c4-8545-c4eef1dc364b-kube-api-access-p57tf\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.518705 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:08 crc kubenswrapper[4728]: E0204 11:31:08.518717 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"316b103e2e93171e2c4891d1155199ff6e035a64de40327d133872e2df9457b3\": container with ID starting with 316b103e2e93171e2c4891d1155199ff6e035a64de40327d133872e2df9457b3 not found: ID does not exist" containerID="316b103e2e93171e2c4891d1155199ff6e035a64de40327d133872e2df9457b3" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.518828 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"316b103e2e93171e2c4891d1155199ff6e035a64de40327d133872e2df9457b3"} err="failed to get container status \"316b103e2e93171e2c4891d1155199ff6e035a64de40327d133872e2df9457b3\": rpc error: code = NotFound desc = could not find container \"316b103e2e93171e2c4891d1155199ff6e035a64de40327d133872e2df9457b3\": container with ID starting with 316b103e2e93171e2c4891d1155199ff6e035a64de40327d133872e2df9457b3 not found: ID does not exist" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.518901 4728 scope.go:117] "RemoveContainer" containerID="be2ed7549686f6e8f37494e214df16d104ab12db64812bb02b872039595575d9" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.585123 4728 scope.go:117] "RemoveContainer" containerID="8aea87d06b1ed666b242ee4e0e03dbd2676587dc95ed71a9994621aba0014a8e" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.608866 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7fac84c-a087-48c4-8545-c4eef1dc364b" (UID: "f7fac84c-a087-48c4-8545-c4eef1dc364b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.619737 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7fac84c-a087-48c4-8545-c4eef1dc364b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.678096 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9478"] Feb 04 11:31:08 crc kubenswrapper[4728]: I0204 11:31:08.680643 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z9478"] Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.390728 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" event={"ID":"335910da-4350-46de-b33b-3169d304c7f9","Type":"ContainerStarted","Data":"d325bf425f3e0a82a14fc0af6e41d65b58f54c35ab7a9e23b66dc27230528e91"} Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.390783 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" event={"ID":"335910da-4350-46de-b33b-3169d304c7f9","Type":"ContainerStarted","Data":"0bf2c809f73adb08faa361c66a75d143a1204941120f986fd0f69273bc3c993f"} Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.391138 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.398960 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r26n" event={"ID":"81d54708-f68a-4e0b-b8e4-699a15e89f03","Type":"ContainerStarted","Data":"f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f"} Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.415689 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" podStartSLOduration=3.415664683 podStartE2EDuration="3.415664683s" podCreationTimestamp="2026-02-04 11:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:31:09.41328585 +0000 UTC m=+218.555990275" watchObservedRunningTime="2026-02-04 11:31:09.415664683 +0000 UTC m=+218.558369088" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.457908 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.559044 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af42db6-6320-4a09-81c8-8c4af6149632" path="/var/lib/kubelet/pods/3af42db6-6320-4a09-81c8-8c4af6149632/volumes" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.559917 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa5fcb4-4586-420d-a621-076ae4c7cf86" path="/var/lib/kubelet/pods/efa5fcb4-4586-420d-a621-076ae4c7cf86/volumes" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.560485 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7fac84c-a087-48c4-8545-c4eef1dc364b" path="/var/lib/kubelet/pods/f7fac84c-a087-48c4-8545-c4eef1dc364b/volumes" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.933490 4728 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bfc585969-kq4sr"] Feb 04 11:31:09 crc kubenswrapper[4728]: E0204 11:31:09.934320 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerName="extract-content" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.934355 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerName="extract-content" Feb 04 11:31:09 crc kubenswrapper[4728]: E0204 11:31:09.934395 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerName="extract-utilities" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.934416 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerName="extract-utilities" Feb 04 11:31:09 crc kubenswrapper[4728]: E0204 11:31:09.934443 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerName="registry-server" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.934460 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerName="registry-server" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.934807 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fac84c-a087-48c4-8545-c4eef1dc364b" containerName="registry-server" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.935655 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.938468 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bfc585969-kq4sr"] Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.940268 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.940288 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.945289 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.945325 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.945441 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.946899 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 04 11:31:09 crc kubenswrapper[4728]: I0204 11:31:09.960201 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.035822 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr92c\" (UniqueName: \"kubernetes.io/projected/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-kube-api-access-vr92c\") pod \"controller-manager-bfc585969-kq4sr\" (UID: 
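The "RemoveStaleState: removing container" / "Deleted CPUSet assignment" pairs show the CPU and memory managers dropping state for the just-deleted pod f7fac84c... before admitting the new controller-manager pod. A simplified sketch of that sweep, assuming a plain map from pod UID to per-container CPU-set strings (illustrative types, not the kubelet's actual ones):

    package main

    import "fmt"

    type stateMem struct {
    	assignments map[string]map[string]string // podUID -> container -> cpuset
    }

    // removeStaleState drops assignments for pods that are no longer
    // active, mirroring the log pairs above. Deleting map entries while
    // ranging over the map is well-defined in Go.
    func (s *stateMem) removeStaleState(active map[string]bool) {
    	for podUID, containers := range s.assignments {
    		if active[podUID] {
    			continue
    		}
    		for name := range containers {
    			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
    			delete(containers, name)
    		}
    		delete(s.assignments, podUID)
    	}
    }

    func main() {
    	s := &stateMem{assignments: map[string]map[string]string{
    		"f7fac84c-a087-48c4-8545-c4eef1dc364b": {"registry-server": "0-3"},
    	}}
    	s.removeStaleState(map[string]bool{}) // the pod was deleted, so its entry goes
    }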
\"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.035876 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-client-ca\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.035926 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-serving-cert\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.035947 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-config\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.035966 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-proxy-ca-bundles\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.137292 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr92c\" (UniqueName: \"kubernetes.io/projected/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-kube-api-access-vr92c\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.137340 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-client-ca\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.137403 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-serving-cert\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.137434 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-config\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 
11:31:10.137463 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-proxy-ca-bundles\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.138670 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-client-ca\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.138726 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-proxy-ca-bundles\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.138800 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sszrr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.138839 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sszrr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.139231 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-config\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.148306 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-serving-cert\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.165355 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr92c\" (UniqueName: \"kubernetes.io/projected/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-kube-api-access-vr92c\") pod \"controller-manager-bfc585969-kq4sr\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") " pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.185704 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sszrr" Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.254822 4728 util.go:30] "No sandbox for pod can be found. 
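Each volume above runs through the same three-step sequence: VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded", driven by a reconcile pass over desired versus actual state. A compressed sketch of that loop, with hypothetical stand-ins for the kubelet's desired/actual state of world:

    package main

    import "fmt"

    type volume struct{ name, pod string }

    // reconcile mounts every desired volume that is not yet in the actual
    // state, logging in the same order as the entries above.
    func reconcile(desired []volume, actual map[string]bool, setUp func(volume) error) {
    	for _, v := range desired {
    		if actual[v.name] {
    			continue // already mounted, nothing to do
    		}
    		fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", v.name, v.pod)
    		if err := setUp(v); err != nil {
    			fmt.Printf("MountVolume.SetUp failed for %q: %v\n", v.name, err)
    			continue
    		}
    		actual[v.name] = true
    		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
    	}
    }

    func main() {
    	pod := "controller-manager-bfc585969-kq4sr"
    	desired := []volume{{"client-ca", pod}, {"config", pod}, {"serving-cert", pod}}
    	reconcile(desired, map[string]bool{}, func(volume) error { return nil })
    }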
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.254822 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr"
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.324672 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.324831 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.398001 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.432105 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4r26n" podStartSLOduration=4.91037013 podStartE2EDuration="58.43208908s" podCreationTimestamp="2026-02-04 11:30:12 +0000 UTC" firstStartedPulling="2026-02-04 11:30:14.79097724 +0000 UTC m=+163.933681625" lastFinishedPulling="2026-02-04 11:31:08.31269619 +0000 UTC m=+217.455400575" observedRunningTime="2026-02-04 11:31:10.431532323 +0000 UTC m=+219.574236708" watchObservedRunningTime="2026-02-04 11:31:10.43208908 +0000 UTC m=+219.574793465"
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.504444 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.505943 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.525358 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bfc585969-kq4sr"]
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.612834 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.663430 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.766000 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.766062 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:31:10 crc kubenswrapper[4728]: I0204 11:31:10.814157 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:31:11 crc kubenswrapper[4728]: I0204 11:31:11.414673 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" event={"ID":"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c","Type":"ContainerStarted","Data":"51f4cf793ddc9bb801c4aecaeef9c1599d389983d83ec4452b73ace2ea47180e"}
Feb 04 11:31:11 crc kubenswrapper[4728]: I0204 11:31:11.414763 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" event={"ID":"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c","Type":"ContainerStarted","Data":"d61cf83cfea10f8a80c8e31ef6b3131f7a667eaf0295c4ba61cb9189c8339c5f"}
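The redhat-operators-4r26n latency entry above is a good worked example of the two durations: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally subtracts the image-pull window; when the pull timestamps are the zero time (as for route-controller-manager earlier), the two coincide. The small program below reproduces the logged numbers; the helper itself is illustrative:

    package main

    import (
    	"fmt"
    	"time"
    )

    // startSLO returns the end-to-end startup duration minus the image-pull
    // window, matching podStartSLOduration in the entries above.
    func startSLO(created, firstPull, lastPull, observedRunning time.Time) time.Duration {
    	e2e := observedRunning.Sub(created)
    	if firstPull.IsZero() || lastPull.IsZero() {
    		return e2e // no pull recorded: SLO duration equals E2E duration
    	}
    	return e2e - lastPull.Sub(firstPull)
    }

    func main() {
    	parse := func(s string) time.Time {
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2026-02-04 11:30:12 +0000 UTC")
    	firstPull := parse("2026-02-04 11:30:14.79097724 +0000 UTC")
    	lastPull := parse("2026-02-04 11:31:08.31269619 +0000 UTC")
    	running := parse("2026-02-04 11:31:10.43208908 +0000 UTC")
    	// E2E = 58.43208908s; pull window = 53.52171895s; SLO = 4.91037013s,
    	// exactly the podStartSLOduration logged for redhat-operators-4r26n.
    	fmt.Println(startSLO(created, firstPull, lastPull, running))
    }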
Feb 04 11:31:11 crc kubenswrapper[4728]: I0204 11:31:11.434723 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" podStartSLOduration=6.434702042 podStartE2EDuration="6.434702042s" podCreationTimestamp="2026-02-04 11:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:31:11.431340549 +0000 UTC m=+220.574044954" watchObservedRunningTime="2026-02-04 11:31:11.434702042 +0000 UTC m=+220.577406427"
Feb 04 11:31:11 crc kubenswrapper[4728]: I0204 11:31:11.474908 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:31:12 crc kubenswrapper[4728]: I0204 11:31:12.327514 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:31:12 crc kubenswrapper[4728]: I0204 11:31:12.328330 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:31:12 crc kubenswrapper[4728]: I0204 11:31:12.370998 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:31:12 crc kubenswrapper[4728]: I0204 11:31:12.420443 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr"
Feb 04 11:31:12 crc kubenswrapper[4728]: I0204 11:31:12.424371 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr"
Feb 04 11:31:12 crc kubenswrapper[4728]: I0204 11:31:12.450201 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9ct6"]
Feb 04 11:31:12 crc kubenswrapper[4728]: I0204 11:31:12.450490 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g9ct6" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" containerName="registry-server" containerID="cri-o://982bed018f143b767f9419cf57903de9929d1244f891181df3a1a3c901801901" gracePeriod=2
Feb 04 11:31:12 crc kubenswrapper[4728]: I0204 11:31:12.470917 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:31:12 crc kubenswrapper[4728]: I0204 11:31:12.734441 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bgxgn"
Feb 04 11:31:12 crc kubenswrapper[4728]: I0204 11:31:12.734482 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bgxgn"
Feb 04 11:31:12 crc kubenswrapper[4728]: I0204 11:31:12.773828 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bgxgn"
Feb 04 11:31:13 crc kubenswrapper[4728]: I0204 11:31:13.048968 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrmfp"]
Feb 04 11:31:13 crc kubenswrapper[4728]: I0204 11:31:13.340086 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4r26n"
Feb 04 11:31:13 crc kubenswrapper[4728]: I0204 11:31:13.340420 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4r26n"
Feb 04 11:31:13 crc kubenswrapper[4728]: I0204 11:31:13.427152 4728 generic.go:334] "Generic (PLEG): container finished" podID="c17fa247-ec01-449d-9888-ab485b1496a6" containerID="982bed018f143b767f9419cf57903de9929d1244f891181df3a1a3c901801901" exitCode=0
Feb 04 11:31:13 crc kubenswrapper[4728]: I0204 11:31:13.427203 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9ct6" event={"ID":"c17fa247-ec01-449d-9888-ab485b1496a6","Type":"ContainerDied","Data":"982bed018f143b767f9419cf57903de9929d1244f891181df3a1a3c901801901"}
Feb 04 11:31:13 crc kubenswrapper[4728]: I0204 11:31:13.427994 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vrmfp" podUID="eef08543-a746-4aad-a4be-5ee0bb7464a8" containerName="registry-server" containerID="cri-o://a8e0642ee3710f1469f1046a1f1f6360a72620eb8813877239e5df09a29f9fb6" gracePeriod=2
Feb 04 11:31:13 crc kubenswrapper[4728]: I0204 11:31:13.467232 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bgxgn"
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.394192 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4r26n" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerName="registry-server" probeResult="failure" output=<
Feb 04 11:31:14 crc kubenswrapper[4728]: 	timeout: failed to connect service ":50051" within 1s
Feb 04 11:31:14 crc kubenswrapper[4728]: >
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.394276 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.435849 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9ct6" event={"ID":"c17fa247-ec01-449d-9888-ab485b1496a6","Type":"ContainerDied","Data":"d10bc983a4d73a42e04a5aa0248ad202a45758d9f12ff8069ba28d2d5c1689e3"}
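The startup-probe failure for redhat-operators-4r26n prints the probe command's output: a health check that could not reach the registry's gRPC port :50051 within its 1s budget. A minimal analogue of that check as a plain dial-with-timeout (a sketch, not the actual probe binary the catalog image ships):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probe attempts a TCP connection within the given budget and reports
    // failure in the same style as the log output above.
    func probe(addr string, timeout time.Duration) error {
    	conn, err := net.DialTimeout("tcp", addr, timeout)
    	if err != nil {
    		return fmt.Errorf("timeout: failed to connect service %q within %s", addr, timeout)
    	}
    	conn.Close()
    	return nil
    }

    func main() {
    	if err := probe("localhost:50051", time.Second); err != nil {
    		fmt.Println(err) // mirrors the probeResult="failure" output
    	}
    }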
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.435868 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9ct6"
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.435897 4728 scope.go:117] "RemoveContainer" containerID="982bed018f143b767f9419cf57903de9929d1244f891181df3a1a3c901801901"
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.438857 4728 generic.go:334] "Generic (PLEG): container finished" podID="eef08543-a746-4aad-a4be-5ee0bb7464a8" containerID="a8e0642ee3710f1469f1046a1f1f6360a72620eb8813877239e5df09a29f9fb6" exitCode=0
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.439017 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrmfp" event={"ID":"eef08543-a746-4aad-a4be-5ee0bb7464a8","Type":"ContainerDied","Data":"a8e0642ee3710f1469f1046a1f1f6360a72620eb8813877239e5df09a29f9fb6"}
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.457671 4728 scope.go:117] "RemoveContainer" containerID="db4fae948f050049f75cb1187649d48dc7341d141106d3c0c8e9f31830dc713f"
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.476422 4728 scope.go:117] "RemoveContainer" containerID="d30dd9606a0f6e055791fb2605cb9c6c21c019d79f75b837a392e38895529982"
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.492876 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-catalog-content\") pod \"c17fa247-ec01-449d-9888-ab485b1496a6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") "
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.493114 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-utilities\") pod \"c17fa247-ec01-449d-9888-ab485b1496a6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") "
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.493188 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5lpp\" (UniqueName: \"kubernetes.io/projected/c17fa247-ec01-449d-9888-ab485b1496a6-kube-api-access-p5lpp\") pod \"c17fa247-ec01-449d-9888-ab485b1496a6\" (UID: \"c17fa247-ec01-449d-9888-ab485b1496a6\") "
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.494199 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-utilities" (OuterVolumeSpecName: "utilities") pod "c17fa247-ec01-449d-9888-ab485b1496a6" (UID: "c17fa247-ec01-449d-9888-ab485b1496a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.495785 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.502286 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17fa247-ec01-449d-9888-ab485b1496a6-kube-api-access-p5lpp" (OuterVolumeSpecName: "kube-api-access-p5lpp") pod "c17fa247-ec01-449d-9888-ab485b1496a6" (UID: "c17fa247-ec01-449d-9888-ab485b1496a6"). InnerVolumeSpecName "kube-api-access-p5lpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.550015 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c17fa247-ec01-449d-9888-ab485b1496a6" (UID: "c17fa247-ec01-449d-9888-ab485b1496a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.597407 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c17fa247-ec01-449d-9888-ab485b1496a6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.597441 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5lpp\" (UniqueName: \"kubernetes.io/projected/c17fa247-ec01-449d-9888-ab485b1496a6-kube-api-access-p5lpp\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.763194 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9ct6"]
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.774316 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g9ct6"]
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.807327 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.902935 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-catalog-content\") pod \"eef08543-a746-4aad-a4be-5ee0bb7464a8\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") "
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.902993 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6twpj\" (UniqueName: \"kubernetes.io/projected/eef08543-a746-4aad-a4be-5ee0bb7464a8-kube-api-access-6twpj\") pod \"eef08543-a746-4aad-a4be-5ee0bb7464a8\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") "
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.903073 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-utilities\") pod \"eef08543-a746-4aad-a4be-5ee0bb7464a8\" (UID: \"eef08543-a746-4aad-a4be-5ee0bb7464a8\") "
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.904213 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-utilities" (OuterVolumeSpecName: "utilities") pod "eef08543-a746-4aad-a4be-5ee0bb7464a8" (UID: "eef08543-a746-4aad-a4be-5ee0bb7464a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:31:14 crc kubenswrapper[4728]: I0204 11:31:14.906997 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef08543-a746-4aad-a4be-5ee0bb7464a8-kube-api-access-6twpj" (OuterVolumeSpecName: "kube-api-access-6twpj") pod "eef08543-a746-4aad-a4be-5ee0bb7464a8" (UID: "eef08543-a746-4aad-a4be-5ee0bb7464a8"). InnerVolumeSpecName "kube-api-access-6twpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.004438 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6twpj\" (UniqueName: \"kubernetes.io/projected/eef08543-a746-4aad-a4be-5ee0bb7464a8-kube-api-access-6twpj\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.004482 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.325071 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eef08543-a746-4aad-a4be-5ee0bb7464a8" (UID: "eef08543-a746-4aad-a4be-5ee0bb7464a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.411809 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef08543-a746-4aad-a4be-5ee0bb7464a8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.451445 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgxgn"]
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.456970 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bgxgn" podUID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" containerName="registry-server" containerID="cri-o://07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b" gracePeriod=2
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.457321 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrmfp"
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.457864 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrmfp" event={"ID":"eef08543-a746-4aad-a4be-5ee0bb7464a8","Type":"ContainerDied","Data":"941c3c89e007e76f140ec70d86d1db10c974dd815cd11385a35b80b64624f374"}
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.457921 4728 scope.go:117] "RemoveContainer" containerID="a8e0642ee3710f1469f1046a1f1f6360a72620eb8813877239e5df09a29f9fb6"
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.476114 4728 scope.go:117] "RemoveContainer" containerID="806a2e250635702fb48d72288289d7eb14ddbfad6f9470121aa04826bb8c67eb"
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.493243 4728 scope.go:117] "RemoveContainer" containerID="f55bb8f923f25a4e5708acd07bf9892670262ccc30d3d7b999df9bf38b5a51ee"
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.496476 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrmfp"]
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.503185 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vrmfp"]
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.564699 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" path="/var/lib/kubelet/pods/c17fa247-ec01-449d-9888-ab485b1496a6/volumes"
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.565946 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef08543-a746-4aad-a4be-5ee0bb7464a8" path="/var/lib/kubelet/pods/eef08543-a746-4aad-a4be-5ee0bb7464a8/volumes"
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.857044 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgxgn"
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.919911 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-utilities\") pod \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\" (UID: \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") "
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.920013 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjthf\" (UniqueName: \"kubernetes.io/projected/8c5be741-eb53-486a-8af4-1e0b4974ddb7-kube-api-access-qjthf\") pod \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\" (UID: \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") "
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.920089 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-catalog-content\") pod \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\" (UID: \"8c5be741-eb53-486a-8af4-1e0b4974ddb7\") "
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.920963 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-utilities" (OuterVolumeSpecName: "utilities") pod "8c5be741-eb53-486a-8af4-1e0b4974ddb7" (UID: "8c5be741-eb53-486a-8af4-1e0b4974ddb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.924907 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5be741-eb53-486a-8af4-1e0b4974ddb7-kube-api-access-qjthf" (OuterVolumeSpecName: "kube-api-access-qjthf") pod "8c5be741-eb53-486a-8af4-1e0b4974ddb7" (UID: "8c5be741-eb53-486a-8af4-1e0b4974ddb7"). InnerVolumeSpecName "kube-api-access-qjthf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:31:15 crc kubenswrapper[4728]: I0204 11:31:15.945189 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c5be741-eb53-486a-8af4-1e0b4974ddb7" (UID: "8c5be741-eb53-486a-8af4-1e0b4974ddb7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.021150 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjthf\" (UniqueName: \"kubernetes.io/projected/8c5be741-eb53-486a-8af4-1e0b4974ddb7-kube-api-access-qjthf\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.021185 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.021194 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c5be741-eb53-486a-8af4-1e0b4974ddb7-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.466246 4728 generic.go:334] "Generic (PLEG): container finished" podID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" containerID="07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b" exitCode=0
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.466359 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgxgn"
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.466385 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgxgn" event={"ID":"8c5be741-eb53-486a-8af4-1e0b4974ddb7","Type":"ContainerDied","Data":"07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b"}
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.466765 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgxgn" event={"ID":"8c5be741-eb53-486a-8af4-1e0b4974ddb7","Type":"ContainerDied","Data":"e093c81057eaef107e4a0f66a2b6f4adff2f4d747f8f46af29c81f9542806231"}
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.466795 4728 scope.go:117] "RemoveContainer" containerID="07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b"
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.494149 4728 scope.go:117] "RemoveContainer" containerID="d27816492f6aabeeb239f78ce43f49e69078c5ee7befbfb15fb7879f0e46ff53"
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.499708 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgxgn"]
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.504378 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgxgn"]
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.530503 4728 scope.go:117] "RemoveContainer" containerID="9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48"
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.543480 4728 scope.go:117] "RemoveContainer" containerID="07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b"
Feb 04 11:31:16 crc kubenswrapper[4728]: E0204 11:31:16.544236 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b\": container with ID starting with 07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b not found: ID does not exist" containerID="07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b"
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.544270 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b"} err="failed to get container status \"07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b\": rpc error: code = NotFound desc = could not find container \"07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b\": container with ID starting with 07843afb1faea9e3d14c412c95e6311a4ea15dd22b4ca1610a09116c982b302b not found: ID does not exist"
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.544296 4728 scope.go:117] "RemoveContainer" containerID="d27816492f6aabeeb239f78ce43f49e69078c5ee7befbfb15fb7879f0e46ff53"
Feb 04 11:31:16 crc kubenswrapper[4728]: E0204 11:31:16.545167 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d27816492f6aabeeb239f78ce43f49e69078c5ee7befbfb15fb7879f0e46ff53\": container with ID starting with d27816492f6aabeeb239f78ce43f49e69078c5ee7befbfb15fb7879f0e46ff53 not found: ID does not exist" containerID="d27816492f6aabeeb239f78ce43f49e69078c5ee7befbfb15fb7879f0e46ff53"
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.545223 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27816492f6aabeeb239f78ce43f49e69078c5ee7befbfb15fb7879f0e46ff53"} err="failed to get container status \"d27816492f6aabeeb239f78ce43f49e69078c5ee7befbfb15fb7879f0e46ff53\": rpc error: code = NotFound desc = could not find container \"d27816492f6aabeeb239f78ce43f49e69078c5ee7befbfb15fb7879f0e46ff53\": container with ID starting with d27816492f6aabeeb239f78ce43f49e69078c5ee7befbfb15fb7879f0e46ff53 not found: ID does not exist"
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.545255 4728 scope.go:117] "RemoveContainer" containerID="9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48"
Feb 04 11:31:16 crc kubenswrapper[4728]: E0204 11:31:16.545938 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48\": container with ID starting with 9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48 not found: ID does not exist" containerID="9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48"
Feb 04 11:31:16 crc kubenswrapper[4728]: I0204 11:31:16.546022 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48"} err="failed to get container status \"9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48\": rpc error: code = NotFound desc = could not find container \"9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48\": container with ID starting with 9d73392aece917872fe43b062dd0a31f123b7cf86c182328c57d798471aabd48 not found: ID does not exist"
Feb 04 11:31:17 crc kubenswrapper[4728]: I0204 11:31:17.560592 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" path="/var/lib/kubelet/pods/8c5be741-eb53-486a-8af4-1e0b4974ddb7/volumes"
Feb 04 11:31:23 crc kubenswrapper[4728]: I0204 11:31:23.384512 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4r26n"
Feb 04 11:31:23 crc kubenswrapper[4728]: I0204 11:31:23.443272 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4r26n"
Feb 04 11:31:25 crc kubenswrapper[4728]: I0204 11:31:25.938349 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bfc585969-kq4sr"]
Feb 04 11:31:25 crc kubenswrapper[4728]: I0204 11:31:25.938585 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" podUID="3d4047fd-fd5a-4b5e-b64f-4afda6032c4c" containerName="controller-manager" containerID="cri-o://51f4cf793ddc9bb801c4aecaeef9c1599d389983d83ec4452b73ace2ea47180e" gracePeriod=30
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.022385 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n"]
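"Cleaned up orphaned pod volumes dir" corresponds to sweeping /var/lib/kubelet/pods for UIDs that no longer belong to any active pod and reclaiming their now-empty volumes directories. A simplified sketch of such a sweep (the real kubelet is more careful, e.g. it refuses to remove non-empty dirs):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // cleanupOrphanedPodDirs removes the volumes dir of every pod UID under
    // podsRoot that is not in the active set.
    func cleanupOrphanedPodDirs(podsRoot string, active map[string]bool) {
    	entries, err := os.ReadDir(podsRoot)
    	if err != nil {
    		return
    	}
    	for _, e := range entries {
    		uid := e.Name()
    		if !e.IsDir() || active[uid] {
    			continue
    		}
    		volumes := filepath.Join(podsRoot, uid, "volumes")
    		if os.RemoveAll(volumes) == nil {
    			fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", uid, volumes)
    		}
    	}
    }

    func main() {
    	root, _ := os.MkdirTemp("", "pods")
    	defer os.RemoveAll(root)
    	os.MkdirAll(filepath.Join(root, "8c5be741-eb53-486a-8af4-1e0b4974ddb7", "volumes"), 0o755)
    	cleanupOrphanedPodDirs(root, map[string]bool{}) // no active pods: the dir is reclaimed
    }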
containerID="cri-o://d325bf425f3e0a82a14fc0af6e41d65b58f54c35ab7a9e23b66dc27230528e91" gracePeriod=30 Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.393065 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" podUID="87039d42-443e-40f7-abe1-a6462556cc32" containerName="oauth-openshift" containerID="cri-o://2182e88d72c5a8b19bed54b7941a9aed00a02b7b54a3838cdd292394be7499ea" gracePeriod=15 Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.528834 4728 generic.go:334] "Generic (PLEG): container finished" podID="87039d42-443e-40f7-abe1-a6462556cc32" containerID="2182e88d72c5a8b19bed54b7941a9aed00a02b7b54a3838cdd292394be7499ea" exitCode=0 Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.528889 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" event={"ID":"87039d42-443e-40f7-abe1-a6462556cc32","Type":"ContainerDied","Data":"2182e88d72c5a8b19bed54b7941a9aed00a02b7b54a3838cdd292394be7499ea"} Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.530225 4728 generic.go:334] "Generic (PLEG): container finished" podID="3d4047fd-fd5a-4b5e-b64f-4afda6032c4c" containerID="51f4cf793ddc9bb801c4aecaeef9c1599d389983d83ec4452b73ace2ea47180e" exitCode=0 Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.530280 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" event={"ID":"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c","Type":"ContainerDied","Data":"51f4cf793ddc9bb801c4aecaeef9c1599d389983d83ec4452b73ace2ea47180e"} Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.530304 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" event={"ID":"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c","Type":"ContainerDied","Data":"d61cf83cfea10f8a80c8e31ef6b3131f7a667eaf0295c4ba61cb9189c8339c5f"} Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.530315 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d61cf83cfea10f8a80c8e31ef6b3131f7a667eaf0295c4ba61cb9189c8339c5f" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.531427 4728 generic.go:334] "Generic (PLEG): container finished" podID="335910da-4350-46de-b33b-3169d304c7f9" containerID="d325bf425f3e0a82a14fc0af6e41d65b58f54c35ab7a9e23b66dc27230528e91" exitCode=0 Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.531454 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" event={"ID":"335910da-4350-46de-b33b-3169d304c7f9","Type":"ContainerDied","Data":"d325bf425f3e0a82a14fc0af6e41d65b58f54c35ab7a9e23b66dc27230528e91"} Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.531474 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" event={"ID":"335910da-4350-46de-b33b-3169d304c7f9","Type":"ContainerDied","Data":"0bf2c809f73adb08faa361c66a75d143a1204941120f986fd0f69273bc3c993f"} Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.531485 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bf2c809f73adb08faa361c66a75d143a1204941120f986fd0f69273bc3c993f" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.551666 4728 util.go:48] "No ready sandbox for pod can be found. 
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.551666 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr"
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.559850 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n"
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.611419 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-proxy-ca-bundles\") pod \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") "
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.611479 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-config\") pod \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") "
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.611508 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-client-ca\") pod \"335910da-4350-46de-b33b-3169d304c7f9\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") "
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.611530 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-serving-cert\") pod \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") "
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.611600 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr92c\" (UniqueName: \"kubernetes.io/projected/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-kube-api-access-vr92c\") pod \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") "
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.611631 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9zft\" (UniqueName: \"kubernetes.io/projected/335910da-4350-46de-b33b-3169d304c7f9-kube-api-access-w9zft\") pod \"335910da-4350-46de-b33b-3169d304c7f9\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") "
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.611654 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335910da-4350-46de-b33b-3169d304c7f9-serving-cert\") pod \"335910da-4350-46de-b33b-3169d304c7f9\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") "
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.611685 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-client-ca\") pod \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\" (UID: \"3d4047fd-fd5a-4b5e-b64f-4afda6032c4c\") "
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.611705 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-config\") pod \"335910da-4350-46de-b33b-3169d304c7f9\" (UID: \"335910da-4350-46de-b33b-3169d304c7f9\") "
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.613714 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3d4047fd-fd5a-4b5e-b64f-4afda6032c4c" (UID: "3d4047fd-fd5a-4b5e-b64f-4afda6032c4c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.614790 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d4047fd-fd5a-4b5e-b64f-4afda6032c4c" (UID: "3d4047fd-fd5a-4b5e-b64f-4afda6032c4c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.615725 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-client-ca" (OuterVolumeSpecName: "client-ca") pod "335910da-4350-46de-b33b-3169d304c7f9" (UID: "335910da-4350-46de-b33b-3169d304c7f9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.616279 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-config" (OuterVolumeSpecName: "config") pod "3d4047fd-fd5a-4b5e-b64f-4afda6032c4c" (UID: "3d4047fd-fd5a-4b5e-b64f-4afda6032c4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.618203 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-config" (OuterVolumeSpecName: "config") pod "335910da-4350-46de-b33b-3169d304c7f9" (UID: "335910da-4350-46de-b33b-3169d304c7f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.628726 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/335910da-4350-46de-b33b-3169d304c7f9-kube-api-access-w9zft" (OuterVolumeSpecName: "kube-api-access-w9zft") pod "335910da-4350-46de-b33b-3169d304c7f9" (UID: "335910da-4350-46de-b33b-3169d304c7f9"). InnerVolumeSpecName "kube-api-access-w9zft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.640918 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/335910da-4350-46de-b33b-3169d304c7f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "335910da-4350-46de-b33b-3169d304c7f9" (UID: "335910da-4350-46de-b33b-3169d304c7f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.641002 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d4047fd-fd5a-4b5e-b64f-4afda6032c4c" (UID: "3d4047fd-fd5a-4b5e-b64f-4afda6032c4c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.641221 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-kube-api-access-vr92c" (OuterVolumeSpecName: "kube-api-access-vr92c") pod "3d4047fd-fd5a-4b5e-b64f-4afda6032c4c" (UID: "3d4047fd-fd5a-4b5e-b64f-4afda6032c4c"). InnerVolumeSpecName "kube-api-access-vr92c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.712521 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.712562 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-config\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.712575 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-client-ca\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.712587 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.712597 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr92c\" (UniqueName: \"kubernetes.io/projected/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-kube-api-access-vr92c\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.712609 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9zft\" (UniqueName: \"kubernetes.io/projected/335910da-4350-46de-b33b-3169d304c7f9-kube-api-access-w9zft\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.712621 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335910da-4350-46de-b33b-3169d304c7f9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.712631 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c-client-ca\") on node \"crc\" DevicePath \"\""
Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.712641 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335910da-4350-46de-b33b-3169d304c7f9-config\") on node \"crc\" DevicePath \"\""
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914379 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-service-ca\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914443 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-audit-policies\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914496 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-idp-0-file-data\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914523 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-error\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914563 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-provider-selection\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914588 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-serving-cert\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914614 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87039d42-443e-40f7-abe1-a6462556cc32-audit-dir\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914651 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-session\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914680 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-trusted-ca-bundle\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: 
I0204 11:31:26.914706 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-ocp-branding-template\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914783 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-cliconfig\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914817 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77hxv\" (UniqueName: \"kubernetes.io/projected/87039d42-443e-40f7-abe1-a6462556cc32-kube-api-access-77hxv\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914871 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-router-certs\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.914902 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-login\") pod \"87039d42-443e-40f7-abe1-a6462556cc32\" (UID: \"87039d42-443e-40f7-abe1-a6462556cc32\") " Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.915039 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.915111 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.915161 4728 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.915362 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87039d42-443e-40f7-abe1-a6462556cc32-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.915991 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.916135 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.918857 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.919079 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.919521 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87039d42-443e-40f7-abe1-a6462556cc32-kube-api-access-77hxv" (OuterVolumeSpecName: "kube-api-access-77hxv") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "kube-api-access-77hxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.919503 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.919658 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.919913 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.920259 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.920440 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:31:26 crc kubenswrapper[4728]: I0204 11:31:26.920806 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "87039d42-443e-40f7-abe1-a6462556cc32" (UID: "87039d42-443e-40f7-abe1-a6462556cc32"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016312 4728 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87039d42-443e-40f7-abe1-a6462556cc32-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016348 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016365 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016379 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016393 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016406 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77hxv\" (UniqueName: \"kubernetes.io/projected/87039d42-443e-40f7-abe1-a6462556cc32-kube-api-access-77hxv\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016419 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016432 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016444 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016458 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016470 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016483 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.016496 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87039d42-443e-40f7-abe1-a6462556cc32-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.538410 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" event={"ID":"87039d42-443e-40f7-abe1-a6462556cc32","Type":"ContainerDied","Data":"01f6e75f4885f5bbeb3814fb705eedc3b4decaf7071f25ff3b6cfe8a88a9567f"} Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.538470 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.538488 4728 scope.go:117] "RemoveContainer" containerID="2182e88d72c5a8b19bed54b7941a9aed00a02b7b54a3838cdd292394be7499ea" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.538519 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cmjx5" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.538541 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bfc585969-kq4sr" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.573695 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n"] Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.575512 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd98b5c79-pvc7n"] Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.587070 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cmjx5"] Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.600256 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cmjx5"] Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.605471 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bfc585969-kq4sr"] Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.610715 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bfc585969-kq4sr"] Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946233 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb"] Feb 04 11:31:27 crc kubenswrapper[4728]: E0204 11:31:27.946581 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87039d42-443e-40f7-abe1-a6462556cc32" containerName="oauth-openshift" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946596 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="87039d42-443e-40f7-abe1-a6462556cc32" containerName="oauth-openshift" Feb 04 11:31:27 crc kubenswrapper[4728]: E0204 11:31:27.946606 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" containerName="extract-content" Feb 04 
11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946612 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" containerName="extract-content"
Feb 04 11:31:27 crc kubenswrapper[4728]: E0204 11:31:27.946619 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef08543-a746-4aad-a4be-5ee0bb7464a8" containerName="extract-utilities"
Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946625 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef08543-a746-4aad-a4be-5ee0bb7464a8" containerName="extract-utilities"
Feb 04 11:31:27 crc kubenswrapper[4728]: E0204 11:31:27.946634 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef08543-a746-4aad-a4be-5ee0bb7464a8" containerName="extract-content"
Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946647 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef08543-a746-4aad-a4be-5ee0bb7464a8" containerName="extract-content"
Feb 04 11:31:27 crc kubenswrapper[4728]: E0204 11:31:27.946671 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef08543-a746-4aad-a4be-5ee0bb7464a8" containerName="registry-server"
Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946678 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef08543-a746-4aad-a4be-5ee0bb7464a8" containerName="registry-server"
Feb 04 11:31:27 crc kubenswrapper[4728]: E0204 11:31:27.946687 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" containerName="extract-utilities"
Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946692 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" containerName="extract-utilities"
Feb 04 11:31:27 crc kubenswrapper[4728]: E0204 11:31:27.946704 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" containerName="extract-content"
Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946709 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" containerName="extract-content"
Feb 04 11:31:27 crc kubenswrapper[4728]: E0204 11:31:27.946719 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4047fd-fd5a-4b5e-b64f-4afda6032c4c" containerName="controller-manager"
Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946725 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4047fd-fd5a-4b5e-b64f-4afda6032c4c" containerName="controller-manager"
Feb 04 11:31:27 crc kubenswrapper[4728]: E0204 11:31:27.946734 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" containerName="extract-utilities"
Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946740 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" containerName="extract-utilities"
Feb 04 11:31:27 crc kubenswrapper[4728]: E0204 11:31:27.946770 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" containerName="registry-server"
Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946777 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" containerName="registry-server"
containerName="registry-server" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946798 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" containerName="registry-server" Feb 04 11:31:27 crc kubenswrapper[4728]: E0204 11:31:27.946808 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335910da-4350-46de-b33b-3169d304c7f9" containerName="route-controller-manager" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946815 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="335910da-4350-46de-b33b-3169d304c7f9" containerName="route-controller-manager" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946921 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4047fd-fd5a-4b5e-b64f-4afda6032c4c" containerName="controller-manager" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946931 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="335910da-4350-46de-b33b-3169d304c7f9" containerName="route-controller-manager" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946938 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef08543-a746-4aad-a4be-5ee0bb7464a8" containerName="registry-server" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946948 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c17fa247-ec01-449d-9888-ab485b1496a6" containerName="registry-server" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946957 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5be741-eb53-486a-8af4-1e0b4974ddb7" containerName="registry-server" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.946964 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="87039d42-443e-40f7-abe1-a6462556cc32" containerName="oauth-openshift" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.947345 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.949825 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c647f6468-9tn4b"] Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.950604 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.956935 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.957172 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.957190 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.958498 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.962059 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c647f6468-9tn4b"] Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.962212 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.964153 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.964399 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.964500 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.964650 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.964666 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.964882 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.965012 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.966054 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb"] Feb 04 11:31:27 crc kubenswrapper[4728]: I0204 11:31:27.969167 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.027587 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpblh\" (UniqueName: \"kubernetes.io/projected/fceffe76-69ff-4267-b556-b35e474d1178-kube-api-access-mpblh\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.027648 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/977577fb-4fd4-4dd9-863a-69d13ee523e5-client-ca\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: \"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.027734 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/977577fb-4fd4-4dd9-863a-69d13ee523e5-serving-cert\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: \"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.027800 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fceffe76-69ff-4267-b556-b35e474d1178-config\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.027821 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4c7m\" (UniqueName: \"kubernetes.io/projected/977577fb-4fd4-4dd9-863a-69d13ee523e5-kube-api-access-x4c7m\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: \"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.027896 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977577fb-4fd4-4dd9-863a-69d13ee523e5-config\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: \"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.027913 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fceffe76-69ff-4267-b556-b35e474d1178-client-ca\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.027938 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fceffe76-69ff-4267-b556-b35e474d1178-proxy-ca-bundles\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.027952 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fceffe76-69ff-4267-b556-b35e474d1178-serving-cert\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.129145 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/fceffe76-69ff-4267-b556-b35e474d1178-config\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.129221 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4c7m\" (UniqueName: \"kubernetes.io/projected/977577fb-4fd4-4dd9-863a-69d13ee523e5-kube-api-access-x4c7m\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: \"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.129331 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977577fb-4fd4-4dd9-863a-69d13ee523e5-config\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: \"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.129362 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fceffe76-69ff-4267-b556-b35e474d1178-client-ca\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.129415 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fceffe76-69ff-4267-b556-b35e474d1178-proxy-ca-bundles\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.129455 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fceffe76-69ff-4267-b556-b35e474d1178-serving-cert\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.129497 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpblh\" (UniqueName: \"kubernetes.io/projected/fceffe76-69ff-4267-b556-b35e474d1178-kube-api-access-mpblh\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.129536 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/977577fb-4fd4-4dd9-863a-69d13ee523e5-client-ca\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: \"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.129567 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/977577fb-4fd4-4dd9-863a-69d13ee523e5-serving-cert\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: 
\"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.130691 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fceffe76-69ff-4267-b556-b35e474d1178-config\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.132170 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fceffe76-69ff-4267-b556-b35e474d1178-proxy-ca-bundles\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.132353 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/977577fb-4fd4-4dd9-863a-69d13ee523e5-client-ca\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: \"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.132385 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977577fb-4fd4-4dd9-863a-69d13ee523e5-config\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: \"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.132661 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fceffe76-69ff-4267-b556-b35e474d1178-client-ca\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.134236 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fceffe76-69ff-4267-b556-b35e474d1178-serving-cert\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.134349 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/977577fb-4fd4-4dd9-863a-69d13ee523e5-serving-cert\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: \"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.154829 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4c7m\" (UniqueName: \"kubernetes.io/projected/977577fb-4fd4-4dd9-863a-69d13ee523e5-kube-api-access-x4c7m\") pod \"route-controller-manager-5d545fbdfb-q95sb\" (UID: \"977577fb-4fd4-4dd9-863a-69d13ee523e5\") " pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.154886 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpblh\" (UniqueName: \"kubernetes.io/projected/fceffe76-69ff-4267-b556-b35e474d1178-kube-api-access-mpblh\") pod \"controller-manager-7c647f6468-9tn4b\" (UID: \"fceffe76-69ff-4267-b556-b35e474d1178\") " pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.266587 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.273199 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.723791 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb"] Feb 04 11:31:28 crc kubenswrapper[4728]: W0204 11:31:28.730502 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod977577fb_4fd4_4dd9_863a_69d13ee523e5.slice/crio-9c74d7c67c504dc0430bf7d70c781eeb0e8166056c9cc82b9631a4122f097cff WatchSource:0}: Error finding container 9c74d7c67c504dc0430bf7d70c781eeb0e8166056c9cc82b9631a4122f097cff: Status 404 returned error can't find the container with id 9c74d7c67c504dc0430bf7d70c781eeb0e8166056c9cc82b9631a4122f097cff Feb 04 11:31:28 crc kubenswrapper[4728]: I0204 11:31:28.767603 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c647f6468-9tn4b"] Feb 04 11:31:28 crc kubenswrapper[4728]: W0204 11:31:28.779536 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfceffe76_69ff_4267_b556_b35e474d1178.slice/crio-3bd4a77dd5f10efae7c773ccca0a9776d083c7eb0a4619811b53cac89ed7db4c WatchSource:0}: Error finding container 3bd4a77dd5f10efae7c773ccca0a9776d083c7eb0a4619811b53cac89ed7db4c: Status 404 returned error can't find the container with id 3bd4a77dd5f10efae7c773ccca0a9776d083c7eb0a4619811b53cac89ed7db4c Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.549029 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" event={"ID":"977577fb-4fd4-4dd9-863a-69d13ee523e5","Type":"ContainerStarted","Data":"e034120cf3df3b2082f08277416b2066ac5309f7d00d45de8b5edb07721af1ab"} Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.549075 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" event={"ID":"977577fb-4fd4-4dd9-863a-69d13ee523e5","Type":"ContainerStarted","Data":"9c74d7c67c504dc0430bf7d70c781eeb0e8166056c9cc82b9631a4122f097cff"} Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.549245 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.550633 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" event={"ID":"fceffe76-69ff-4267-b556-b35e474d1178","Type":"ContainerStarted","Data":"adbb85d427032e2b8e141414fcc58dae67944083b2cbd4cf71f7f9e593cbde01"} Feb 04 11:31:29 crc 
Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.550665 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" event={"ID":"fceffe76-69ff-4267-b556-b35e474d1178","Type":"ContainerStarted","Data":"3bd4a77dd5f10efae7c773ccca0a9776d083c7eb0a4619811b53cac89ed7db4c"}
Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.550963 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b"
Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.560652 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="335910da-4350-46de-b33b-3169d304c7f9" path="/var/lib/kubelet/pods/335910da-4350-46de-b33b-3169d304c7f9/volumes"
Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.562127 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4047fd-fd5a-4b5e-b64f-4afda6032c4c" path="/var/lib/kubelet/pods/3d4047fd-fd5a-4b5e-b64f-4afda6032c4c/volumes"
Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.563581 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87039d42-443e-40f7-abe1-a6462556cc32" path="/var/lib/kubelet/pods/87039d42-443e-40f7-abe1-a6462556cc32/volumes"
Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.564807 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb"
Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.564883 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b"
Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.568922 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d545fbdfb-q95sb" podStartSLOduration=3.568897699 podStartE2EDuration="3.568897699s" podCreationTimestamp="2026-02-04 11:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:31:29.567170836 +0000 UTC m=+238.709875261" watchObservedRunningTime="2026-02-04 11:31:29.568897699 +0000 UTC m=+238.711602084"
Feb 04 11:31:29 crc kubenswrapper[4728]: I0204 11:31:29.582956 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c647f6468-9tn4b" podStartSLOduration=4.582936382 podStartE2EDuration="4.582936382s" podCreationTimestamp="2026-02-04 11:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:31:29.581218499 +0000 UTC m=+238.723922914" watchObservedRunningTime="2026-02-04 11:31:29.582936382 +0000 UTC m=+238.725640777"
Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.950156 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8"]
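The pod_startup_latency_tracker.go:104 entries above reduce to timestamp arithmetic: with no image pull recorded (firstStartedPulling is the zero time), the reported podStartSLOduration matches watchObservedRunningTime minus podCreationTimestamp, e.g. 11:31:29.568897699 minus 11:31:26 gives 3.568897699s for route-controller-manager-5d545fbdfb-q95sb. A quick Go check of that arithmetic, using the timestamps from the log entry:

// Verifies the SLO-duration arithmetic implied by the entries above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-02-04T11:31:26Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-02-04T11:31:29.568897699Z")
	// Prints 3.568897699, matching podStartSLOduration in the log.
	fmt.Printf("podStartSLOduration = %.9fs\n", observed.Sub(created).Seconds())
}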
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.955555 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.955791 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.955857 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.955901 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.958906 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.959032 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.959187 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.959689 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.959782 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.959778 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.960037 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.960788 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.973359 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8"] Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.974575 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.975916 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 04 11:31:32 crc kubenswrapper[4728]: I0204 11:31:32.981347 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.094454 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " 
pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.094564 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-template-error\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.094607 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.094691 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.094788 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n86qw\" (UniqueName: \"kubernetes.io/projected/da0126df-6ce9-4b82-a784-cbfcd759b4f0-kube-api-access-n86qw\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.094830 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.094946 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-session\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.094982 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.095007 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.095094 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-template-login\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.095144 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/da0126df-6ce9-4b82-a784-cbfcd759b4f0-audit-dir\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.095590 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-audit-policies\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.095637 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.095705 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.196651 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.196739 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-template-error\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.196826 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.196932 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.196989 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n86qw\" (UniqueName: \"kubernetes.io/projected/da0126df-6ce9-4b82-a784-cbfcd759b4f0-kube-api-access-n86qw\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.197036 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.197136 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-session\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.197192 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.197238 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.197288 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-template-login\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.197342 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/da0126df-6ce9-4b82-a784-cbfcd759b4f0-audit-dir\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.197410 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-audit-policies\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.197474 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.197546 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.197583 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/da0126df-6ce9-4b82-a784-cbfcd759b4f0-audit-dir\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.198511 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.199373 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-audit-policies\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.199387 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.200722 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.203051 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.204355 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.204968 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-template-error\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.206024 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.206331 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-user-template-login\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.210084 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-session\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.210801 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.211003 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/da0126df-6ce9-4b82-a784-cbfcd759b4f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" 
(UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.217580 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n86qw\" (UniqueName: \"kubernetes.io/projected/da0126df-6ce9-4b82-a784-cbfcd759b4f0-kube-api-access-n86qw\") pod \"oauth-openshift-7f8484fbcc-gjfq8\" (UID: \"da0126df-6ce9-4b82-a784-cbfcd759b4f0\") " pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.269199 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:33 crc kubenswrapper[4728]: I0204 11:31:33.702671 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8"] Feb 04 11:31:33 crc kubenswrapper[4728]: W0204 11:31:33.709014 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda0126df_6ce9_4b82_a784_cbfcd759b4f0.slice/crio-acbb8726e32896bc4d27428ed70d3cb3a812965526b51e9fd241024c644c3f8d WatchSource:0}: Error finding container acbb8726e32896bc4d27428ed70d3cb3a812965526b51e9fd241024c644c3f8d: Status 404 returned error can't find the container with id acbb8726e32896bc4d27428ed70d3cb3a812965526b51e9fd241024c644c3f8d Feb 04 11:31:34 crc kubenswrapper[4728]: I0204 11:31:34.576622 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" event={"ID":"da0126df-6ce9-4b82-a784-cbfcd759b4f0","Type":"ContainerStarted","Data":"06a06b46568e02935536b48ab9bd6fc77f43f30898c37fcb91350d971ab2a25d"} Feb 04 11:31:34 crc kubenswrapper[4728]: I0204 11:31:34.576910 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" event={"ID":"da0126df-6ce9-4b82-a784-cbfcd759b4f0","Type":"ContainerStarted","Data":"acbb8726e32896bc4d27428ed70d3cb3a812965526b51e9fd241024c644c3f8d"} Feb 04 11:31:34 crc kubenswrapper[4728]: I0204 11:31:34.577035 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:34 crc kubenswrapper[4728]: I0204 11:31:34.586844 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" Feb 04 11:31:34 crc kubenswrapper[4728]: I0204 11:31:34.604497 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f8484fbcc-gjfq8" podStartSLOduration=33.604471923 podStartE2EDuration="33.604471923s" podCreationTimestamp="2026-02-04 11:31:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:31:34.601006095 +0000 UTC m=+243.743710480" watchObservedRunningTime="2026-02-04 11:31:34.604471923 +0000 UTC m=+243.747176338" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.378825 4728 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.379565 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.379814 4728 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.380255 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f" gracePeriod=15 Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.380376 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34" gracePeriod=15 Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.380288 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822" gracePeriod=15 Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.380428 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180" gracePeriod=15 Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.380504 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770" gracePeriod=15 Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381191 4728 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 04 11:31:35 crc kubenswrapper[4728]: E0204 11:31:35.381495 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381515 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 04 11:31:35 crc kubenswrapper[4728]: E0204 11:31:35.381527 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381536 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 04 11:31:35 crc kubenswrapper[4728]: E0204 11:31:35.381553 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381586 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 04 11:31:35 crc 
Feb 04 11:31:35 crc kubenswrapper[4728]: E0204 11:31:35.381612 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381621 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 04 11:31:35 crc kubenswrapper[4728]: E0204 11:31:35.381635 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381645 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 04 11:31:35 crc kubenswrapper[4728]: E0204 11:31:35.381658 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381666 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 04 11:31:35 crc kubenswrapper[4728]: E0204 11:31:35.381680 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381691 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381889 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381917 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381935 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381947 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381957 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381970 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.381987 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 04 11:31:35 crc kubenswrapper[4728]: E0204 11:31:35.382154 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.382170 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.418925 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.525046 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.525118 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.525149 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.525231 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.525289 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.525363 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.525397 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.525418 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.585809 4728 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.587420 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.588364 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822" exitCode=0 Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.588411 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770" exitCode=0 Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.588423 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34" exitCode=0 Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.588431 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180" exitCode=2 Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.589564 4728 scope.go:117] "RemoveContainer" containerID="670cf54716fb4bf97044aaf9cfa541b8670807eb8918e7f577da0761184ebd14" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.626919 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.626986 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627006 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627025 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627031 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc 
kubenswrapper[4728]: I0204 11:31:35.627085 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627087 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627049 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627125 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627146 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627208 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627240 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627302 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627346 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627423 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.627606 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: I0204 11:31:35.721478 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 04 11:31:35 crc kubenswrapper[4728]: E0204 11:31:35.753957 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.128:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189107cab6167d77 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-04 11:31:35.753031031 +0000 UTC m=+244.895735426,LastTimestamp:2026-02-04 11:31:35.753031031 +0000 UTC m=+244.895735426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 04 11:31:36 crc kubenswrapper[4728]: I0204 11:31:36.596497 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4467d55af7f9caaf8445a6433b51bb0cb8823554086962adf9607da7115dbdd6"} Feb 04 11:31:36 crc kubenswrapper[4728]: I0204 11:31:36.597284 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1b153bf1267b61230197957defded224866f1ff823328321613aa9055b932ef6"} Feb 04 11:31:36 crc kubenswrapper[4728]: I0204 11:31:36.598879 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:36 crc kubenswrapper[4728]: I0204 11:31:36.600965 4728 generic.go:334] "Generic (PLEG): container finished" podID="84324b99-575b-4d1e-963a-4ce98447b52b" containerID="6ebedb15b6f29276904a9653732b9ca52d959836c780895805fc26b1b11019f9" exitCode=0 Feb 04 11:31:36 crc kubenswrapper[4728]: I0204 11:31:36.601049 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84324b99-575b-4d1e-963a-4ce98447b52b","Type":"ContainerDied","Data":"6ebedb15b6f29276904a9653732b9ca52d959836c780895805fc26b1b11019f9"} Feb 04 11:31:36 crc kubenswrapper[4728]: I0204 11:31:36.601407 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:36 crc kubenswrapper[4728]: I0204 11:31:36.601606 4728 status_manager.go:851] "Failed to get status for pod" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:36 crc kubenswrapper[4728]: I0204 11:31:36.603860 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 04 11:31:36 crc kubenswrapper[4728]: E0204 11:31:36.648245 4728 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.128:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-57d49" volumeName="registry-storage" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.755935 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.757207 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.757840 4728 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.758160 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.758610 4728 status_manager.go:851] "Failed to get status for pod" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.863648 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.863711 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.863731 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.863849 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.863878 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.863890 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.864079 4728 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.864095 4728 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.864105 4728 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.983297 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.984026 4728 status_manager.go:851] "Failed to get status for pod" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.984390 4728 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:37 crc kubenswrapper[4728]: I0204 11:31:37.985113 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.167609 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-kubelet-dir\") pod \"84324b99-575b-4d1e-963a-4ce98447b52b\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.167724 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84324b99-575b-4d1e-963a-4ce98447b52b-kube-api-access\") pod \"84324b99-575b-4d1e-963a-4ce98447b52b\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.167858 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-var-lock\") pod \"84324b99-575b-4d1e-963a-4ce98447b52b\" (UID: \"84324b99-575b-4d1e-963a-4ce98447b52b\") " Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.167835 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84324b99-575b-4d1e-963a-4ce98447b52b" (UID: 
"84324b99-575b-4d1e-963a-4ce98447b52b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.167955 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-var-lock" (OuterVolumeSpecName: "var-lock") pod "84324b99-575b-4d1e-963a-4ce98447b52b" (UID: "84324b99-575b-4d1e-963a-4ce98447b52b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.168380 4728 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.168431 4728 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84324b99-575b-4d1e-963a-4ce98447b52b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.173920 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84324b99-575b-4d1e-963a-4ce98447b52b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84324b99-575b-4d1e-963a-4ce98447b52b" (UID: "84324b99-575b-4d1e-963a-4ce98447b52b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.271524 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84324b99-575b-4d1e-963a-4ce98447b52b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 04 11:31:38 crc kubenswrapper[4728]: E0204 11:31:38.462510 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.128:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189107cab6167d77 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-04 11:31:35.753031031 +0000 UTC m=+244.895735426,LastTimestamp:2026-02-04 11:31:35.753031031 +0000 UTC m=+244.895735426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.615583 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84324b99-575b-4d1e-963a-4ce98447b52b","Type":"ContainerDied","Data":"b0c5aa1899265e7189527a487a9ea770f1689c77616e132a5a71ea640cca61c3"} Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.615639 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0c5aa1899265e7189527a487a9ea770f1689c77616e132a5a71ea640cca61c3" Feb 04 11:31:38 crc 
kubenswrapper[4728]: I0204 11:31:38.615694 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.618630 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.619974 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f" exitCode=0 Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.620017 4728 scope.go:117] "RemoveContainer" containerID="6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.620087 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.638984 4728 scope.go:117] "RemoveContainer" containerID="11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.642343 4728 status_manager.go:851] "Failed to get status for pod" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.642991 4728 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.643235 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.643783 4728 status_manager.go:851] "Failed to get status for pod" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.644076 4728 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.644541 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.102.83.128:6443: connect: connection refused" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.656967 4728 scope.go:117] "RemoveContainer" containerID="36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.671625 4728 scope.go:117] "RemoveContainer" containerID="66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.687138 4728 scope.go:117] "RemoveContainer" containerID="1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.704576 4728 scope.go:117] "RemoveContainer" containerID="7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.725909 4728 scope.go:117] "RemoveContainer" containerID="6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822" Feb 04 11:31:38 crc kubenswrapper[4728]: E0204 11:31:38.726325 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\": container with ID starting with 6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822 not found: ID does not exist" containerID="6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.726356 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822"} err="failed to get container status \"6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\": rpc error: code = NotFound desc = could not find container \"6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822\": container with ID starting with 6c21bdc9a17d2292462a7ed104de2d0ef63e51165a619a1f96cf129a84bef822 not found: ID does not exist" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.726380 4728 scope.go:117] "RemoveContainer" containerID="11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770" Feb 04 11:31:38 crc kubenswrapper[4728]: E0204 11:31:38.726980 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\": container with ID starting with 11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770 not found: ID does not exist" containerID="11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.727005 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770"} err="failed to get container status \"11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\": rpc error: code = NotFound desc = could not find container \"11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770\": container with ID starting with 11b1410ac53ecfd3015841e48dd5b055367e94dc5f9947d4714b5e4426533770 not found: ID does not exist" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.727020 4728 scope.go:117] "RemoveContainer" containerID="36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34" Feb 04 11:31:38 crc kubenswrapper[4728]: E0204 11:31:38.727354 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
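The alternating RemoveContainer / NotFound pairs above and below are an idempotent cleanup: once cri-o has removed a container, the follow-up ContainerStatus query returns NotFound, which pod_container_deletor logs and then ignores, since "already gone" is the desired end state. A minimal Go sketch of that treat-NotFound-as-success pattern (hypothetical in-memory store, not the CRI client):

    package main

    import (
    	"errors"
    	"fmt"
    )

    // errNotFound mimics the gRPC "code = NotFound" the runtime returns above.
    var errNotFound = errors.New("code = NotFound")

    // store is a hypothetical in-memory stand-in for the container runtime.
    type store map[string]bool

    func (s store) containerStatus(id string) error {
    	if !s[id] {
    		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
    	}
    	return nil
    }

    // removeContainer deletes and then verifies; a NotFound answer is
    // logged and treated as success, so repeated deletes are harmless.
    func (s store) removeContainer(id string) {
    	delete(s, id)
    	if err := s.containerStatus(id); errors.Is(err, errNotFound) {
    		fmt.Printf("DeleteContainer returned error (ignored, already gone): %v\n", err)
    	}
    }

    func main() {
    	s := store{"6c21bdc9": true}
    	s.removeContainer("6c21bdc9")
    	s.removeContainer("6c21bdc9") // second delete is a no-op, not a failure
    }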
Feb 04 11:31:38 crc kubenswrapper[4728]: E0204 11:31:38.727354 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\": container with ID starting with 36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34 not found: ID does not exist" containerID="36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34"
Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.727371 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34"} err="failed to get container status \"36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\": rpc error: code = NotFound desc = could not find container \"36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34\": container with ID starting with 36cf923b30713a818ebd303149952d0fa17750f63fe3ace0a46ed4b6a025ed34 not found: ID does not exist"
Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.727385 4728 scope.go:117] "RemoveContainer" containerID="66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180"
Feb 04 11:31:38 crc kubenswrapper[4728]: E0204 11:31:38.727815 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\": container with ID starting with 66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180 not found: ID does not exist" containerID="66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180"
Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.727842 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180"} err="failed to get container status \"66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\": rpc error: code = NotFound desc = could not find container \"66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180\": container with ID starting with 66d96f6346eefe283e44c873639efc1cbae1e4436302d58f2d71fc988a8eb180 not found: ID does not exist"
Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.727858 4728 scope.go:117] "RemoveContainer" containerID="1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f"
Feb 04 11:31:38 crc kubenswrapper[4728]: E0204 11:31:38.728120 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\": container with ID starting with 1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f not found: ID does not exist" containerID="1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f"
Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.728141 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f"} err="failed to get container status \"1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\": rpc error: code = NotFound desc = could not find container \"1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f\": container with ID starting with 1760f1cc867cba3b2c271a146ff17c5c60d37fe8575e26fb19d8d0cdd8b2971f not found: ID does not exist"
containerID="7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec" Feb 04 11:31:38 crc kubenswrapper[4728]: E0204 11:31:38.728431 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\": container with ID starting with 7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec not found: ID does not exist" containerID="7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec" Feb 04 11:31:38 crc kubenswrapper[4728]: I0204 11:31:38.728450 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec"} err="failed to get container status \"7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\": rpc error: code = NotFound desc = could not find container \"7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec\": container with ID starting with 7203db24b6b39467f54c4268b8de22041ade337b56f34d89c9156d50dc0b87ec not found: ID does not exist" Feb 04 11:31:39 crc kubenswrapper[4728]: I0204 11:31:39.560010 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 04 11:31:41 crc kubenswrapper[4728]: I0204 11:31:41.555471 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:41 crc kubenswrapper[4728]: I0204 11:31:41.557106 4728 status_manager.go:851] "Failed to get status for pod" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:41 crc kubenswrapper[4728]: E0204 11:31:41.634108 4728 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:41 crc kubenswrapper[4728]: E0204 11:31:41.634536 4728 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:41 crc kubenswrapper[4728]: E0204 11:31:41.635044 4728 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:41 crc kubenswrapper[4728]: E0204 11:31:41.635465 4728 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:41 crc kubenswrapper[4728]: E0204 11:31:41.635891 4728 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:41 crc kubenswrapper[4728]: I0204 11:31:41.635921 4728 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 04 11:31:41 crc kubenswrapper[4728]: E0204 11:31:41.636097 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="200ms" Feb 04 11:31:41 crc kubenswrapper[4728]: E0204 11:31:41.837495 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="400ms" Feb 04 11:31:42 crc kubenswrapper[4728]: E0204 11:31:42.239146 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="800ms" Feb 04 11:31:43 crc kubenswrapper[4728]: E0204 11:31:43.040330 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="1.6s" Feb 04 11:31:44 crc kubenswrapper[4728]: E0204 11:31:44.641340 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="3.2s" Feb 04 11:31:47 crc kubenswrapper[4728]: E0204 11:31:47.843060 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="6.4s" Feb 04 11:31:48 crc kubenswrapper[4728]: E0204 11:31:48.463914 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.128:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189107cab6167d77 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-04 11:31:35.753031031 +0000 UTC m=+244.895735426,LastTimestamp:2026-02-04 11:31:35.753031031 +0000 UTC m=+244.895735426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 04 11:31:48 crc kubenswrapper[4728]: I0204 11:31:48.681320 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 04 11:31:48 crc kubenswrapper[4728]: I0204 11:31:48.681371 4728 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594" exitCode=1 Feb 04 11:31:48 crc kubenswrapper[4728]: I0204 11:31:48.681404 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594"} Feb 04 11:31:48 crc kubenswrapper[4728]: I0204 11:31:48.681899 4728 scope.go:117] "RemoveContainer" containerID="b8fadeea277e5f66ec03c5fcbe49958011370a33b8bbade6105d10223f3e2594" Feb 04 11:31:48 crc kubenswrapper[4728]: I0204 11:31:48.683000 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:48 crc kubenswrapper[4728]: I0204 11:31:48.683627 4728 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:48 crc kubenswrapper[4728]: I0204 11:31:48.684045 4728 status_manager.go:851] "Failed to get status for pod" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:49 crc kubenswrapper[4728]: I0204 11:31:49.596815 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:31:49 crc kubenswrapper[4728]: I0204 11:31:49.693305 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 04 11:31:49 crc kubenswrapper[4728]: I0204 11:31:49.693405 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cd2a3d55376218660177261c5f5a0e48ffe2385710ebf4af28275440bf56fc3b"} Feb 04 11:31:49 crc kubenswrapper[4728]: I0204 11:31:49.694829 4728 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:49 
crc kubenswrapper[4728]: I0204 11:31:49.695532 4728 status_manager.go:851] "Failed to get status for pod" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:49 crc kubenswrapper[4728]: I0204 11:31:49.696192 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:50 crc kubenswrapper[4728]: I0204 11:31:50.553555 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:50 crc kubenswrapper[4728]: I0204 11:31:50.554944 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:50 crc kubenswrapper[4728]: I0204 11:31:50.555528 4728 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:50 crc kubenswrapper[4728]: I0204 11:31:50.556302 4728 status_manager.go:851] "Failed to get status for pod" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:50 crc kubenswrapper[4728]: I0204 11:31:50.568568 4728 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3" Feb 04 11:31:50 crc kubenswrapper[4728]: I0204 11:31:50.568620 4728 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3" Feb 04 11:31:50 crc kubenswrapper[4728]: E0204 11:31:50.569266 4728 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:50 crc kubenswrapper[4728]: I0204 11:31:50.570158 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:50 crc kubenswrapper[4728]: I0204 11:31:50.700906 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"902a60a9fdfba03b175b9661f0c9410e9faed8fe85a5b885dc4495ebeb566f30"} Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.560526 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.561606 4728 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.562215 4728 status_manager.go:851] "Failed to get status for pod" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.562522 4728 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.710052 4728 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="0faa0abcd5bf7fd2ac5a7346376a3c952c41d4396a5172e68094a52a7941d723" exitCode=0 Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.710126 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"0faa0abcd5bf7fd2ac5a7346376a3c952c41d4396a5172e68094a52a7941d723"} Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.710312 4728 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3" Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.710344 4728 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3" Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.710825 4728 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:51 crc kubenswrapper[4728]: E0204 11:31:51.711015 4728 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.711103 4728 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.711367 4728 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:51 crc kubenswrapper[4728]: I0204 11:31:51.711637 4728 status_manager.go:851] "Failed to get status for pod" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Feb 04 11:31:52 crc kubenswrapper[4728]: I0204 11:31:52.720443 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3aa89f2c83ff7a2be98fffa6fcc3855634f0e93cb35c00204c86fec69497f4c2"} Feb 04 11:31:52 crc kubenswrapper[4728]: I0204 11:31:52.720771 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2a7350fd748a421670a2707731ee857e3051c611d7afba80f531674414b50671"} Feb 04 11:31:52 crc kubenswrapper[4728]: I0204 11:31:52.720790 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d90d21eaa752b0cd270f04fb2a54d319913643f8e7e8972b9c9ad30eab363a84"} Feb 04 11:31:52 crc kubenswrapper[4728]: I0204 11:31:52.720807 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0a9fcaf3227c50b3b8045139e46413b32a6e6c1db6177cda667d640ca7bf0a1b"} Feb 04 11:31:53 crc kubenswrapper[4728]: I0204 11:31:53.727232 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7a5b477edd4a2f513a247b3551f1ecd91b3547ea945e76a6f0c07b2079119932"} Feb 04 11:31:53 crc kubenswrapper[4728]: I0204 11:31:53.727475 4728 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3" Feb 04 11:31:53 crc kubenswrapper[4728]: I0204 11:31:53.727489 4728 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3" Feb 04 11:31:53 crc kubenswrapper[4728]: I0204 11:31:53.727642 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:55 crc kubenswrapper[4728]: I0204 11:31:55.570361 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:55 crc kubenswrapper[4728]: I0204 11:31:55.571570 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:55 crc kubenswrapper[4728]: I0204 11:31:55.577158 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:58 crc kubenswrapper[4728]: I0204 11:31:58.290816 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:31:58 crc kubenswrapper[4728]: I0204 11:31:58.294332 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:31:58 crc kubenswrapper[4728]: I0204 11:31:58.737180 4728 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 04 11:31:58 crc kubenswrapper[4728]: I0204 11:31:58.753665 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:31:58 crc kubenswrapper[4728]: I0204 11:31:58.758316 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 04 11:31:58 crc kubenswrapper[4728]: I0204 11:31:58.758579 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="058ab202-4617-40b3-87c8-b90ea73c8da7" Feb 04 11:31:59 crc kubenswrapper[4728]: I0204 11:31:59.759451 4728 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3" Feb 04 11:31:59 crc kubenswrapper[4728]: I0204 11:31:59.759516 4728 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc10fffa-3df0-497b-a8ec-7cbb2d0b1ee3" Feb 04 11:31:59 crc kubenswrapper[4728]: I0204 11:31:59.764195 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="058ab202-4617-40b3-87c8-b90ea73c8da7" Feb 04 11:32:08 crc kubenswrapper[4728]: I0204 11:32:08.843836 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 04 11:32:08 crc kubenswrapper[4728]: I0204 11:32:08.983466 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 04 11:32:09 crc kubenswrapper[4728]: I0204 11:32:09.505928 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 04 11:32:09 crc kubenswrapper[4728]: I0204 11:32:09.598889 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 04 11:32:09 crc kubenswrapper[4728]: I0204 11:32:09.638352 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 04 
11:32:09 crc kubenswrapper[4728]: I0204 11:32:09.667621 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 04 11:32:09 crc kubenswrapper[4728]: I0204 11:32:09.737079 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 04 11:32:10 crc kubenswrapper[4728]: I0204 11:32:10.370649 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 04 11:32:10 crc kubenswrapper[4728]: I0204 11:32:10.419872 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 04 11:32:10 crc kubenswrapper[4728]: I0204 11:32:10.806845 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 04 11:32:11 crc kubenswrapper[4728]: I0204 11:32:11.148929 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 04 11:32:11 crc kubenswrapper[4728]: I0204 11:32:11.609653 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 04 11:32:11 crc kubenswrapper[4728]: I0204 11:32:11.742767 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 04 11:32:11 crc kubenswrapper[4728]: I0204 11:32:11.751520 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 04 11:32:11 crc kubenswrapper[4728]: I0204 11:32:11.892152 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.048051 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.082453 4728 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.231195 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.247909 4728 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.376019 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.396449 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.468104 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.547824 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.597521 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.630678 4728 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.725287 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.783790 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.805744 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.809839 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.835651 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.838559 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.878485 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.901056 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.920583 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.963300 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.977214 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 04 11:32:12 crc kubenswrapper[4728]: I0204 11:32:12.990630 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.040667 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.051682 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.089736 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.089876 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.160695 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.176088 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.320185 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.424024 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.449713 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.451692 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.501068 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.549436 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.620658 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.643980 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.751389 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.756551 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.805551 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.808984 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.827450 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.884409 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.897096 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.918519 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.948415 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 04 11:32:13 crc kubenswrapper[4728]: I0204 11:32:13.982828 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.049079 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.074090 4728 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.082513 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.098075 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.100910 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.119321 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.241245 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.253891 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.368168 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.401519 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.445242 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.492578 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.531154 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.535888 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.545549 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.546476 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.572942 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.578426 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.580437 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.587895 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.627439 4728 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.637013 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.696789 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.703471 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.794238 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.811143 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.818279 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.875578 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.877273 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 04 11:32:14 crc kubenswrapper[4728]: I0204 11:32:14.965746 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.033178 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.033331 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.035816 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.067232 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.285680 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.305403 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.526051 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.644157 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.707252 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.837456 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.837592 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.872022 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.883792 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 04 11:32:15 crc kubenswrapper[4728]: I0204 11:32:15.886039 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.016072 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.037253 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.120533 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.151032 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.162911 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.258442 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.332234 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.336425 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.370170 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.406465 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.407016 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.480421 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.531602 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.540980 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 04 
11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.608431 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.675914 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.857029 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.950489 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 04 11:32:16 crc kubenswrapper[4728]: I0204 11:32:16.989739 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.030258 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.194523 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.225822 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.309575 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.377066 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.516136 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.555254 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.757028 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.775640 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.813843 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.825981 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.826769 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 04 11:32:17 crc kubenswrapper[4728]: I0204 11:32:17.961710 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.032667 4728 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.039735 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.081561 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.094498 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.098003 4728 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.120935 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.178111 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.178167 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.234848 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.287658 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.350574 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.384026 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.480658 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.489491 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.673773 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.674927 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.691538 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.693462 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.860576 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.897584 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.930829 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.945848 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 04 11:32:18 crc kubenswrapper[4728]: I0204 11:32:18.978124 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.033491 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.070543 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.092732 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.102315 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.118436 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.155934 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.308439 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.332651 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.437702 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.672609 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.708450 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.806281 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.809126 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.813341 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.867558 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.870193 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.965709 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 04 11:32:19 crc kubenswrapper[4728]: I0204 11:32:19.977057 4728 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.038205 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.103133 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.143582 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.166712 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.211989 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.367813 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.387301 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.445479 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.480739 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.525872 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.539545 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.653338 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.674375 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.678022 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.686995 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.740558 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.852938 4728 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.930271 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.978699 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 04 11:32:20 crc kubenswrapper[4728]: I0204 11:32:20.991713 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.002015 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.066221 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.083278 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.087816 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.298918 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.335457 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.438954 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.475363 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.556466 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.580727 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.593293 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.651769 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.657584 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.674390 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.688146 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 04 11:32:21 crc kubenswrapper[4728]: I0204 11:32:21.869952 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" 
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.034344 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.055060 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.073617 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.166590 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.186305 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.203063 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.399644 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.496575 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.574924 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.617402 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.619177 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.694336 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.812846 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.910443 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 04 11:32:22 crc kubenswrapper[4728]: I0204 11:32:22.987547 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.006654 4728 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.008963 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=48.008950332 podStartE2EDuration="48.008950332s" podCreationTimestamp="2026-02-04 11:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:31:58.776812038 +0000 UTC m=+267.919516423" watchObservedRunningTime="2026-02-04 11:32:23.008950332 +0000 UTC m=+292.151654717"
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.010176 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.010218 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.015426 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.020985 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.041427 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.041406563 podStartE2EDuration="25.041406563s" podCreationTimestamp="2026-02-04 11:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:32:23.038607613 +0000 UTC m=+292.181312018" watchObservedRunningTime="2026-02-04 11:32:23.041406563 +0000 UTC m=+292.184110968"
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.124714 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.152400 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.210138 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.471356 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.607369 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.640155 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 04 11:32:23 crc kubenswrapper[4728]: I0204 11:32:23.909567 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 04 11:32:24 crc kubenswrapper[4728]: I0204 11:32:24.643604 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 04 11:32:24 crc kubenswrapper[4728]: I0204 11:32:24.841656 4728 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 04 11:32:25 crc kubenswrapper[4728]: I0204 11:32:25.031975 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 04 11:32:25 crc kubenswrapper[4728]: I0204 11:32:25.503079 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 04 11:32:31 crc kubenswrapper[4728]: I0204 11:32:31.333135 4728 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 04 11:32:32 crc kubenswrapper[4728]: I0204 11:32:32.757429 4728 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 04 11:32:32 crc kubenswrapper[4728]: I0204 11:32:32.757925 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4467d55af7f9caaf8445a6433b51bb0cb8823554086962adf9607da7115dbdd6" gracePeriod=5
Feb 04 11:32:36 crc kubenswrapper[4728]: I0204 11:32:36.770565 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 04 11:32:37 crc kubenswrapper[4728]: I0204 11:32:37.983303 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 04 11:32:37 crc kubenswrapper[4728]: I0204 11:32:37.983633 4728 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4467d55af7f9caaf8445a6433b51bb0cb8823554086962adf9607da7115dbdd6" exitCode=137
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.322482 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.322545 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.461414 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.483586 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.483633 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.483655 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.483745 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.483822 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.483811 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.483850 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.483824 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.483961 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.484100 4728 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.484112 4728 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.484122 4728 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.484131 4728 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.491480 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.585249 4728 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.991474 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.991537 4728 scope.go:117] "RemoveContainer" containerID="4467d55af7f9caaf8445a6433b51bb0cb8823554086962adf9607da7115dbdd6"
Feb 04 11:32:38 crc kubenswrapper[4728]: I0204 11:32:38.991654 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 04 11:32:39 crc kubenswrapper[4728]: I0204 11:32:39.560481 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 04 11:32:39 crc kubenswrapper[4728]: I0204 11:32:39.560993 4728 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 04 11:32:39 crc kubenswrapper[4728]: I0204 11:32:39.570452 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 04 11:32:39 crc kubenswrapper[4728]: I0204 11:32:39.570486 4728 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2a96bb4a-b44c-4e97-b929-6c6ac5fe1210"
Feb 04 11:32:39 crc kubenswrapper[4728]: I0204 11:32:39.573680 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 04 11:32:39 crc kubenswrapper[4728]: I0204 11:32:39.573712 4728 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2a96bb4a-b44c-4e97-b929-6c6ac5fe1210"
Feb 04 11:32:40 crc kubenswrapper[4728]: I0204 11:32:40.481092 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 04 11:32:41 crc kubenswrapper[4728]: I0204 11:32:41.004217 4728 generic.go:334] "Generic (PLEG): container finished" podID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerID="b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350" exitCode=0
Feb 04 11:32:41 crc kubenswrapper[4728]: I0204 11:32:41.004321 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" event={"ID":"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa","Type":"ContainerDied","Data":"b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350"}
Feb 04 11:32:41 crc kubenswrapper[4728]: I0204 11:32:41.004942 4728 scope.go:117] "RemoveContainer" containerID="b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350"
Feb 04 11:32:41 crc kubenswrapper[4728]: I0204 11:32:41.662441 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 04 11:32:42 crc kubenswrapper[4728]: I0204 11:32:42.010610 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" event={"ID":"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa","Type":"ContainerStarted","Data":"81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f"}
Feb 04 11:32:42 crc kubenswrapper[4728]: I0204 11:32:42.011240 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5"
Feb 04 11:32:42 crc kubenswrapper[4728]: I0204 11:32:42.012810 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5"
Feb 04 11:32:48 crc kubenswrapper[4728]: I0204 11:32:48.019426 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 04 11:32:53 crc kubenswrapper[4728]: I0204 11:32:53.547798 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 04 11:32:59 crc kubenswrapper[4728]: I0204 11:32:59.419226 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 04 11:32:59 crc kubenswrapper[4728]: I0204 11:32:59.563019 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 04 11:33:00 crc kubenswrapper[4728]: I0204 11:33:00.202288 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 04 11:33:01 crc kubenswrapper[4728]: I0204 11:33:01.168928 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 04 11:33:07 crc kubenswrapper[4728]: I0204 11:33:07.213085 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 04 11:33:35 crc kubenswrapper[4728]: I0204 11:33:35.448285 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:33:35 crc kubenswrapper[4728]: I0204 11:33:35.448692 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:33:47 crc kubenswrapper[4728]: I0204 11:33:47.959304 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-scq96"]
Feb 04 11:33:47 crc kubenswrapper[4728]: E0204 11:33:47.960244 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 04 11:33:47 crc kubenswrapper[4728]: I0204 11:33:47.960266 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 04 11:33:47 crc kubenswrapper[4728]: E0204 11:33:47.960289 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" containerName="installer"
Feb 04 11:33:47 crc kubenswrapper[4728]: I0204 11:33:47.960335 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" containerName="installer"
Feb 04 11:33:47 crc kubenswrapper[4728]: I0204 11:33:47.960497 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="84324b99-575b-4d1e-963a-4ce98447b52b" containerName="installer"
Feb 04 11:33:47 crc kubenswrapper[4728]: I0204 11:33:47.960533 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 04 11:33:47 crc kubenswrapper[4728]: I0204 11:33:47.961157 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:47 crc kubenswrapper[4728]: I0204 11:33:47.972440 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-scq96"]
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.028856 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e2262bf-6cdd-4361-8494-476166eab752-trusted-ca\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.028901 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e2262bf-6cdd-4361-8494-476166eab752-bound-sa-token\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.028938 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dws6j\" (UniqueName: \"kubernetes.io/projected/1e2262bf-6cdd-4361-8494-476166eab752-kube-api-access-dws6j\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.028965 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e2262bf-6cdd-4361-8494-476166eab752-ca-trust-extracted\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.028983 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e2262bf-6cdd-4361-8494-476166eab752-registry-tls\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.029111 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.029185 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e2262bf-6cdd-4361-8494-476166eab752-registry-certificates\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.029234 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e2262bf-6cdd-4361-8494-476166eab752-installation-pull-secrets\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.055131 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.130566 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e2262bf-6cdd-4361-8494-476166eab752-registry-certificates\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.130617 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e2262bf-6cdd-4361-8494-476166eab752-installation-pull-secrets\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.130653 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e2262bf-6cdd-4361-8494-476166eab752-trusted-ca\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.130674 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e2262bf-6cdd-4361-8494-476166eab752-bound-sa-token\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.130701 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dws6j\" (UniqueName: \"kubernetes.io/projected/1e2262bf-6cdd-4361-8494-476166eab752-kube-api-access-dws6j\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.130727 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e2262bf-6cdd-4361-8494-476166eab752-ca-trust-extracted\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.130744 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e2262bf-6cdd-4361-8494-476166eab752-registry-tls\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.131292 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e2262bf-6cdd-4361-8494-476166eab752-ca-trust-extracted\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.132138 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e2262bf-6cdd-4361-8494-476166eab752-trusted-ca\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.132256 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e2262bf-6cdd-4361-8494-476166eab752-registry-certificates\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.136259 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e2262bf-6cdd-4361-8494-476166eab752-installation-pull-secrets\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.136266 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e2262bf-6cdd-4361-8494-476166eab752-registry-tls\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.150931 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e2262bf-6cdd-4361-8494-476166eab752-bound-sa-token\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.152449 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dws6j\" (UniqueName: \"kubernetes.io/projected/1e2262bf-6cdd-4361-8494-476166eab752-kube-api-access-dws6j\") pod \"image-registry-66df7c8f76-scq96\" (UID: \"1e2262bf-6cdd-4361-8494-476166eab752\") " pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.282429 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:48 crc kubenswrapper[4728]: I0204 11:33:48.465797 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-scq96"]
Feb 04 11:33:49 crc kubenswrapper[4728]: I0204 11:33:49.384066 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-scq96" event={"ID":"1e2262bf-6cdd-4361-8494-476166eab752","Type":"ContainerStarted","Data":"c66f9ec0a0598059926397f67c8c8ed9ce848a9942be1b9aa7e07bb0660ddf5c"}
Feb 04 11:33:49 crc kubenswrapper[4728]: I0204 11:33:49.384451 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:33:49 crc kubenswrapper[4728]: I0204 11:33:49.384475 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-scq96" event={"ID":"1e2262bf-6cdd-4361-8494-476166eab752","Type":"ContainerStarted","Data":"e16042bb85ba8bafbb29fe4cf12df998c4275c297ecfe93ea13a4791698b17cd"}
Feb 04 11:33:49 crc kubenswrapper[4728]: I0204 11:33:49.405420 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-scq96" podStartSLOduration=2.40539994 podStartE2EDuration="2.40539994s" podCreationTimestamp="2026-02-04 11:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:33:49.402978378 +0000 UTC m=+378.545682783" watchObservedRunningTime="2026-02-04 11:33:49.40539994 +0000 UTC m=+378.548104335"
Feb 04 11:34:05 crc kubenswrapper[4728]: I0204 11:34:05.448800 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:34:05 crc kubenswrapper[4728]: I0204 11:34:05.449385 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:34:08 crc kubenswrapper[4728]: I0204 11:34:08.287106 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-scq96"
Feb 04 11:34:08 crc kubenswrapper[4728]: I0204 11:34:08.343235 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57d49"]
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.789812 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w94hf"]
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.790694 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w94hf" podUID="b13c4294-fd84-478b-b4a0-321a5d706499" containerName="registry-server" containerID="cri-o://0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2" gracePeriod=30
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.803081 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sszrr"]
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.803293 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sszrr" podUID="68c43db7-d07e-45eb-bd58-6651d8a0e342" containerName="registry-server" containerID="cri-o://7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba" gracePeriod=30
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.817942 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2v2s5"]
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.818409 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" podUID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerName="marketplace-operator" containerID="cri-o://81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f" gracePeriod=30
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.829494 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7djd8"]
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.829816 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7djd8" podUID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" containerName="registry-server" containerID="cri-o://ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e" gracePeriod=30
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.840009 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4r26n"]
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.840386 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4r26n" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerName="registry-server" containerID="cri-o://f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f" gracePeriod=30
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.849880 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8zzsd"]
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.850728 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd"
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.861919 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8zzsd"]
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.969499 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88xwz\" (UniqueName: \"kubernetes.io/projected/cc15bd74-2783-4922-bb1b-9d4b38b5f3ed-kube-api-access-88xwz\") pod \"marketplace-operator-79b997595-8zzsd\" (UID: \"cc15bd74-2783-4922-bb1b-9d4b38b5f3ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd"
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.969791 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc15bd74-2783-4922-bb1b-9d4b38b5f3ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8zzsd\" (UID: \"cc15bd74-2783-4922-bb1b-9d4b38b5f3ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd"
Feb 04 11:34:15 crc kubenswrapper[4728]: I0204 11:34:15.969862 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc15bd74-2783-4922-bb1b-9d4b38b5f3ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8zzsd\" (UID: \"cc15bd74-2783-4922-bb1b-9d4b38b5f3ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.072831 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88xwz\" (UniqueName: \"kubernetes.io/projected/cc15bd74-2783-4922-bb1b-9d4b38b5f3ed-kube-api-access-88xwz\") pod \"marketplace-operator-79b997595-8zzsd\" (UID: \"cc15bd74-2783-4922-bb1b-9d4b38b5f3ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.073333 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc15bd74-2783-4922-bb1b-9d4b38b5f3ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8zzsd\" (UID: \"cc15bd74-2783-4922-bb1b-9d4b38b5f3ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.073809 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc15bd74-2783-4922-bb1b-9d4b38b5f3ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8zzsd\" (UID: \"cc15bd74-2783-4922-bb1b-9d4b38b5f3ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.075477 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc15bd74-2783-4922-bb1b-9d4b38b5f3ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8zzsd\" (UID: \"cc15bd74-2783-4922-bb1b-9d4b38b5f3ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.080546 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc15bd74-2783-4922-bb1b-9d4b38b5f3ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8zzsd\" (UID: \"cc15bd74-2783-4922-bb1b-9d4b38b5f3ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.092339 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88xwz\" (UniqueName: \"kubernetes.io/projected/cc15bd74-2783-4922-bb1b-9d4b38b5f3ed-kube-api-access-88xwz\") pod \"marketplace-operator-79b997595-8zzsd\" (UID: \"cc15bd74-2783-4922-bb1b-9d4b38b5f3ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.249962 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.255125 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w94hf"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.260452 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.275933 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csmts\" (UniqueName: \"kubernetes.io/projected/b13c4294-fd84-478b-b4a0-321a5d706499-kube-api-access-csmts\") pod \"b13c4294-fd84-478b-b4a0-321a5d706499\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.276041 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-catalog-content\") pod \"b13c4294-fd84-478b-b4a0-321a5d706499\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.276093 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hvqr\" (UniqueName: \"kubernetes.io/projected/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-kube-api-access-2hvqr\") pod \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.276134 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-trusted-ca\") pod \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.276187 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-utilities\") pod \"b13c4294-fd84-478b-b4a0-321a5d706499\" (UID: \"b13c4294-fd84-478b-b4a0-321a5d706499\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.276267 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-operator-metrics\") pod \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\" (UID: \"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.280229 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-utilities" (OuterVolumeSpecName: "utilities") pod "b13c4294-fd84-478b-b4a0-321a5d706499" (UID: "b13c4294-fd84-478b-b4a0-321a5d706499"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.283776 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" (UID: "2b4bc824-0bd6-4f1a-9f4b-67c844d24baa"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.285074 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" (UID: "2b4bc824-0bd6-4f1a-9f4b-67c844d24baa"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.295543 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-kube-api-access-2hvqr" (OuterVolumeSpecName: "kube-api-access-2hvqr") pod "2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" (UID: "2b4bc824-0bd6-4f1a-9f4b-67c844d24baa"). InnerVolumeSpecName "kube-api-access-2hvqr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.302895 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13c4294-fd84-478b-b4a0-321a5d706499-kube-api-access-csmts" (OuterVolumeSpecName: "kube-api-access-csmts") pod "b13c4294-fd84-478b-b4a0-321a5d706499" (UID: "b13c4294-fd84-478b-b4a0-321a5d706499"). InnerVolumeSpecName "kube-api-access-csmts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.307980 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sszrr"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.308313 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7djd8"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.308354 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4r26n"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.368882 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b13c4294-fd84-478b-b4a0-321a5d706499" (UID: "b13c4294-fd84-478b-b4a0-321a5d706499"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379014 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-catalog-content\") pod \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379067 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-catalog-content\") pod \"68c43db7-d07e-45eb-bd58-6651d8a0e342\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379092 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-utilities\") pod \"81d54708-f68a-4e0b-b8e4-699a15e89f03\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379121 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zwln\" (UniqueName: \"kubernetes.io/projected/68c43db7-d07e-45eb-bd58-6651d8a0e342-kube-api-access-6zwln\") pod \"68c43db7-d07e-45eb-bd58-6651d8a0e342\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379150 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-utilities\") pod \"68c43db7-d07e-45eb-bd58-6651d8a0e342\" (UID: \"68c43db7-d07e-45eb-bd58-6651d8a0e342\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379220 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-catalog-content\") pod \"81d54708-f68a-4e0b-b8e4-699a15e89f03\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379244 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgk4h\" (UniqueName: \"kubernetes.io/projected/af9c8d19-58ae-479c-8c47-3ce89d9c803c-kube-api-access-hgk4h\") pod \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379263 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-utilities\") pod \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\" (UID: \"af9c8d19-58ae-479c-8c47-3ce89d9c803c\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379313 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zng5p\" (UniqueName: \"kubernetes.io/projected/81d54708-f68a-4e0b-b8e4-699a15e89f03-kube-api-access-zng5p\") pod \"81d54708-f68a-4e0b-b8e4-699a15e89f03\" (UID: \"81d54708-f68a-4e0b-b8e4-699a15e89f03\") "
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379578 4728 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379597 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csmts\" (UniqueName: \"kubernetes.io/projected/b13c4294-fd84-478b-b4a0-321a5d706499-kube-api-access-csmts\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379608 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379619 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hvqr\" (UniqueName: \"kubernetes.io/projected/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-kube-api-access-2hvqr\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379631 4728 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.379642 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c4294-fd84-478b-b4a0-321a5d706499-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.380611 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-utilities" (OuterVolumeSpecName: "utilities") pod "68c43db7-d07e-45eb-bd58-6651d8a0e342" (UID: "68c43db7-d07e-45eb-bd58-6651d8a0e342"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.381548 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-utilities" (OuterVolumeSpecName: "utilities") pod "af9c8d19-58ae-479c-8c47-3ce89d9c803c" (UID: "af9c8d19-58ae-479c-8c47-3ce89d9c803c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.381892 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-utilities" (OuterVolumeSpecName: "utilities") pod "81d54708-f68a-4e0b-b8e4-699a15e89f03" (UID: "81d54708-f68a-4e0b-b8e4-699a15e89f03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.398046 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c43db7-d07e-45eb-bd58-6651d8a0e342-kube-api-access-6zwln" (OuterVolumeSpecName: "kube-api-access-6zwln") pod "68c43db7-d07e-45eb-bd58-6651d8a0e342" (UID: "68c43db7-d07e-45eb-bd58-6651d8a0e342"). InnerVolumeSpecName "kube-api-access-6zwln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.402714 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af9c8d19-58ae-479c-8c47-3ce89d9c803c-kube-api-access-hgk4h" (OuterVolumeSpecName: "kube-api-access-hgk4h") pod "af9c8d19-58ae-479c-8c47-3ce89d9c803c" (UID: "af9c8d19-58ae-479c-8c47-3ce89d9c803c"). InnerVolumeSpecName "kube-api-access-hgk4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.409652 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d54708-f68a-4e0b-b8e4-699a15e89f03-kube-api-access-zng5p" (OuterVolumeSpecName: "kube-api-access-zng5p") pod "81d54708-f68a-4e0b-b8e4-699a15e89f03" (UID: "81d54708-f68a-4e0b-b8e4-699a15e89f03"). InnerVolumeSpecName "kube-api-access-zng5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.416000 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af9c8d19-58ae-479c-8c47-3ce89d9c803c" (UID: "af9c8d19-58ae-479c-8c47-3ce89d9c803c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.449772 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68c43db7-d07e-45eb-bd58-6651d8a0e342" (UID: "68c43db7-d07e-45eb-bd58-6651d8a0e342"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.474527 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8zzsd"]
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.480552 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zng5p\" (UniqueName: \"kubernetes.io/projected/81d54708-f68a-4e0b-b8e4-699a15e89f03-kube-api-access-zng5p\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.480575 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.480584 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.480592 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.480603 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zwln\" (UniqueName: \"kubernetes.io/projected/68c43db7-d07e-45eb-bd58-6651d8a0e342-kube-api-access-6zwln\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.480611 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c43db7-d07e-45eb-bd58-6651d8a0e342-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.480619 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgk4h\" (UniqueName: \"kubernetes.io/projected/af9c8d19-58ae-479c-8c47-3ce89d9c803c-kube-api-access-hgk4h\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.480627 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9c8d19-58ae-479c-8c47-3ce89d9c803c-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.531191 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81d54708-f68a-4e0b-b8e4-699a15e89f03" (UID: "81d54708-f68a-4e0b-b8e4-699a15e89f03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.544180 4728 generic.go:334] "Generic (PLEG): container finished" podID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerID="f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f" exitCode=0
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.544290 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4r26n"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.544272 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r26n" event={"ID":"81d54708-f68a-4e0b-b8e4-699a15e89f03","Type":"ContainerDied","Data":"f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f"}
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.544429 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r26n" event={"ID":"81d54708-f68a-4e0b-b8e4-699a15e89f03","Type":"ContainerDied","Data":"76e62adfbb8ee2c65f86e80d94f48938949f0d4aaffc51e93001521cf4089853"}
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.544451 4728 scope.go:117] "RemoveContainer" containerID="f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f"
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.549140 4728 generic.go:334] "Generic (PLEG): container finished" podID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" containerID="ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e" exitCode=0
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.549171 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7djd8" event={"ID":"af9c8d19-58ae-479c-8c47-3ce89d9c803c","Type":"ContainerDied","Data":"ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e"}
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.549204 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7djd8" event={"ID":"af9c8d19-58ae-479c-8c47-3ce89d9c803c","Type":"ContainerDied","Data":"19c3cd9114029b19f07e3c70e3bc3fae0ad4c93322810f3f3d1dac293ef68ac3"}
Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.549226 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7djd8" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.550500 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd" event={"ID":"cc15bd74-2783-4922-bb1b-9d4b38b5f3ed","Type":"ContainerStarted","Data":"a93f90de3477132bc5756cc5b44c8e57748826c2606a391e4ac75ddc71ea3a60"} Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.552434 4728 generic.go:334] "Generic (PLEG): container finished" podID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerID="81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f" exitCode=0 Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.552503 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" event={"ID":"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa","Type":"ContainerDied","Data":"81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f"} Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.552518 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.552560 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2v2s5" event={"ID":"2b4bc824-0bd6-4f1a-9f4b-67c844d24baa","Type":"ContainerDied","Data":"ba41ea138a121c8f684829ed789c0688e5bfb6ca42b01bbbc183a35b92ee49c4"} Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.555310 4728 generic.go:334] "Generic (PLEG): container finished" podID="b13c4294-fd84-478b-b4a0-321a5d706499" containerID="0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2" exitCode=0 Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.555383 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w94hf" event={"ID":"b13c4294-fd84-478b-b4a0-321a5d706499","Type":"ContainerDied","Data":"0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2"} Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.555404 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w94hf" event={"ID":"b13c4294-fd84-478b-b4a0-321a5d706499","Type":"ContainerDied","Data":"5c4b9fd2740ade93272d7c58538afef4c58704d38cbdcb18179b49215707a553"} Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.555478 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w94hf" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.575816 4728 scope.go:117] "RemoveContainer" containerID="4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.575923 4728 generic.go:334] "Generic (PLEG): container finished" podID="68c43db7-d07e-45eb-bd58-6651d8a0e342" containerID="7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba" exitCode=0 Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.575961 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sszrr" event={"ID":"68c43db7-d07e-45eb-bd58-6651d8a0e342","Type":"ContainerDied","Data":"7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba"} Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.576022 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sszrr" event={"ID":"68c43db7-d07e-45eb-bd58-6651d8a0e342","Type":"ContainerDied","Data":"d0b9deab00cb5c03964b582b5401403f78f798b9fb7645456c2d1a4bc81e85b0"} Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.576109 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sszrr" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.581170 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d54708-f68a-4e0b-b8e4-699a15e89f03-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.600345 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4r26n"] Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.603108 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4r26n"] Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.611811 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7djd8"] Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.615801 4728 scope.go:117] "RemoveContainer" containerID="16c2e59082b027919f76ec8ecff9ad3d2035ff49ee1bdd70da464257cfe0665c" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.626528 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7djd8"] Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.636383 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w94hf"] Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.642359 4728 scope.go:117] "RemoveContainer" containerID="f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.642952 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w94hf"] Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.643266 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f\": container with ID starting with f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f not found: ID does not exist" containerID="f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.643300 4728 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f"} err="failed to get container status \"f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f\": rpc error: code = NotFound desc = could not find container \"f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f\": container with ID starting with f82999c1a5d1f50c0a95f949d005d8eacf4206d7c28ed09c08a7e50d6ff02d4f not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.643329 4728 scope.go:117] "RemoveContainer" containerID="4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.643771 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348\": container with ID starting with 4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348 not found: ID does not exist" containerID="4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.643804 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348"} err="failed to get container status \"4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348\": rpc error: code = NotFound desc = could not find container \"4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348\": container with ID starting with 4ef9be95709165bd0463cfb1fe2ec85bf1babad59447f9dd1f996d85f28f3348 not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.643825 4728 scope.go:117] "RemoveContainer" containerID="16c2e59082b027919f76ec8ecff9ad3d2035ff49ee1bdd70da464257cfe0665c" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.644122 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c2e59082b027919f76ec8ecff9ad3d2035ff49ee1bdd70da464257cfe0665c\": container with ID starting with 16c2e59082b027919f76ec8ecff9ad3d2035ff49ee1bdd70da464257cfe0665c not found: ID does not exist" containerID="16c2e59082b027919f76ec8ecff9ad3d2035ff49ee1bdd70da464257cfe0665c" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.644143 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c2e59082b027919f76ec8ecff9ad3d2035ff49ee1bdd70da464257cfe0665c"} err="failed to get container status \"16c2e59082b027919f76ec8ecff9ad3d2035ff49ee1bdd70da464257cfe0665c\": rpc error: code = NotFound desc = could not find container \"16c2e59082b027919f76ec8ecff9ad3d2035ff49ee1bdd70da464257cfe0665c\": container with ID starting with 16c2e59082b027919f76ec8ecff9ad3d2035ff49ee1bdd70da464257cfe0665c not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.644156 4728 scope.go:117] "RemoveContainer" containerID="ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.646146 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2v2s5"] Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.648760 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2v2s5"] Feb 04 
11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.651551 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sszrr"] Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.654785 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sszrr"] Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.659604 4728 scope.go:117] "RemoveContainer" containerID="ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.671712 4728 scope.go:117] "RemoveContainer" containerID="94c7011c0c984ecfe7ca1e4a7c89956b5fdf04cc7e69de4a9e241504c267ab64" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.683946 4728 scope.go:117] "RemoveContainer" containerID="ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.684371 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e\": container with ID starting with ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e not found: ID does not exist" containerID="ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.684461 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e"} err="failed to get container status \"ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e\": rpc error: code = NotFound desc = could not find container \"ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e\": container with ID starting with ac90a9b036705c91cacffef6dbab2c49cc772668afd223faabd0643ad7dd2f3e not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.684544 4728 scope.go:117] "RemoveContainer" containerID="ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.684921 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce\": container with ID starting with ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce not found: ID does not exist" containerID="ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.685019 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce"} err="failed to get container status \"ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce\": rpc error: code = NotFound desc = could not find container \"ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce\": container with ID starting with ac959b879c8a1bf313487d0c0fd98f34428bfc85a9b9c87de7d5c70f108141ce not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.685091 4728 scope.go:117] "RemoveContainer" containerID="94c7011c0c984ecfe7ca1e4a7c89956b5fdf04cc7e69de4a9e241504c267ab64" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.685529 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"94c7011c0c984ecfe7ca1e4a7c89956b5fdf04cc7e69de4a9e241504c267ab64\": container with ID starting with 94c7011c0c984ecfe7ca1e4a7c89956b5fdf04cc7e69de4a9e241504c267ab64 not found: ID does not exist" containerID="94c7011c0c984ecfe7ca1e4a7c89956b5fdf04cc7e69de4a9e241504c267ab64" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.685569 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94c7011c0c984ecfe7ca1e4a7c89956b5fdf04cc7e69de4a9e241504c267ab64"} err="failed to get container status \"94c7011c0c984ecfe7ca1e4a7c89956b5fdf04cc7e69de4a9e241504c267ab64\": rpc error: code = NotFound desc = could not find container \"94c7011c0c984ecfe7ca1e4a7c89956b5fdf04cc7e69de4a9e241504c267ab64\": container with ID starting with 94c7011c0c984ecfe7ca1e4a7c89956b5fdf04cc7e69de4a9e241504c267ab64 not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.685594 4728 scope.go:117] "RemoveContainer" containerID="81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.697188 4728 scope.go:117] "RemoveContainer" containerID="b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.712841 4728 scope.go:117] "RemoveContainer" containerID="81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.713318 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f\": container with ID starting with 81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f not found: ID does not exist" containerID="81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.713485 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f"} err="failed to get container status \"81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f\": rpc error: code = NotFound desc = could not find container \"81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f\": container with ID starting with 81ab37730a27d73dffef271a2ff96da16df5ca47deeef956c33e8490137aed0f not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.713609 4728 scope.go:117] "RemoveContainer" containerID="b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.714311 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350\": container with ID starting with b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350 not found: ID does not exist" containerID="b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.714345 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350"} err="failed to get container status \"b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350\": rpc error: code = NotFound desc = could not find container 
\"b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350\": container with ID starting with b632bc44f4c1e3a12c1eead1dc9875152df14631717bc2dc6bbbf0f90945d350 not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.714373 4728 scope.go:117] "RemoveContainer" containerID="0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.727705 4728 scope.go:117] "RemoveContainer" containerID="f14fc433c2b613974479b8d38955b06dc7fa88542c80a5a8d3d72bcdb556721e" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.748032 4728 scope.go:117] "RemoveContainer" containerID="43b17b32dfdc64b05ca602e73193c9c641bfb3fc25d1aad22ea5e4e96dc70b6b" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.761311 4728 scope.go:117] "RemoveContainer" containerID="0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.761675 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2\": container with ID starting with 0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2 not found: ID does not exist" containerID="0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.761705 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2"} err="failed to get container status \"0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2\": rpc error: code = NotFound desc = could not find container \"0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2\": container with ID starting with 0ddcec3099b3bbbaa3a5fe01eb37c50845a310b59eb30b2ea942e9b26b6701d2 not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.761733 4728 scope.go:117] "RemoveContainer" containerID="f14fc433c2b613974479b8d38955b06dc7fa88542c80a5a8d3d72bcdb556721e" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.762314 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14fc433c2b613974479b8d38955b06dc7fa88542c80a5a8d3d72bcdb556721e\": container with ID starting with f14fc433c2b613974479b8d38955b06dc7fa88542c80a5a8d3d72bcdb556721e not found: ID does not exist" containerID="f14fc433c2b613974479b8d38955b06dc7fa88542c80a5a8d3d72bcdb556721e" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.762431 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14fc433c2b613974479b8d38955b06dc7fa88542c80a5a8d3d72bcdb556721e"} err="failed to get container status \"f14fc433c2b613974479b8d38955b06dc7fa88542c80a5a8d3d72bcdb556721e\": rpc error: code = NotFound desc = could not find container \"f14fc433c2b613974479b8d38955b06dc7fa88542c80a5a8d3d72bcdb556721e\": container with ID starting with f14fc433c2b613974479b8d38955b06dc7fa88542c80a5a8d3d72bcdb556721e not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.762518 4728 scope.go:117] "RemoveContainer" containerID="43b17b32dfdc64b05ca602e73193c9c641bfb3fc25d1aad22ea5e4e96dc70b6b" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.762874 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"43b17b32dfdc64b05ca602e73193c9c641bfb3fc25d1aad22ea5e4e96dc70b6b\": container with ID starting with 43b17b32dfdc64b05ca602e73193c9c641bfb3fc25d1aad22ea5e4e96dc70b6b not found: ID does not exist" containerID="43b17b32dfdc64b05ca602e73193c9c641bfb3fc25d1aad22ea5e4e96dc70b6b" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.762899 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b17b32dfdc64b05ca602e73193c9c641bfb3fc25d1aad22ea5e4e96dc70b6b"} err="failed to get container status \"43b17b32dfdc64b05ca602e73193c9c641bfb3fc25d1aad22ea5e4e96dc70b6b\": rpc error: code = NotFound desc = could not find container \"43b17b32dfdc64b05ca602e73193c9c641bfb3fc25d1aad22ea5e4e96dc70b6b\": container with ID starting with 43b17b32dfdc64b05ca602e73193c9c641bfb3fc25d1aad22ea5e4e96dc70b6b not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.762913 4728 scope.go:117] "RemoveContainer" containerID="7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.774479 4728 scope.go:117] "RemoveContainer" containerID="e5a31f756d4fe49f9dbf0dd9185be0b1c8b50edf7fac91010bd0127e04183999" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.791025 4728 scope.go:117] "RemoveContainer" containerID="17dac3bc2951278f544dc44e35de35cd2ba72913746ac6be89658368e2631982" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.804687 4728 scope.go:117] "RemoveContainer" containerID="7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.805215 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba\": container with ID starting with 7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba not found: ID does not exist" containerID="7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.805285 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba"} err="failed to get container status \"7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba\": rpc error: code = NotFound desc = could not find container \"7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba\": container with ID starting with 7fdd941fd79f65a4b7e9fa414523ee6f1e1e22fc50447e30bf476e41de6621ba not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.805313 4728 scope.go:117] "RemoveContainer" containerID="e5a31f756d4fe49f9dbf0dd9185be0b1c8b50edf7fac91010bd0127e04183999" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.805527 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5a31f756d4fe49f9dbf0dd9185be0b1c8b50edf7fac91010bd0127e04183999\": container with ID starting with e5a31f756d4fe49f9dbf0dd9185be0b1c8b50edf7fac91010bd0127e04183999 not found: ID does not exist" containerID="e5a31f756d4fe49f9dbf0dd9185be0b1c8b50edf7fac91010bd0127e04183999" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.805555 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5a31f756d4fe49f9dbf0dd9185be0b1c8b50edf7fac91010bd0127e04183999"} err="failed 
to get container status \"e5a31f756d4fe49f9dbf0dd9185be0b1c8b50edf7fac91010bd0127e04183999\": rpc error: code = NotFound desc = could not find container \"e5a31f756d4fe49f9dbf0dd9185be0b1c8b50edf7fac91010bd0127e04183999\": container with ID starting with e5a31f756d4fe49f9dbf0dd9185be0b1c8b50edf7fac91010bd0127e04183999 not found: ID does not exist" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.805572 4728 scope.go:117] "RemoveContainer" containerID="17dac3bc2951278f544dc44e35de35cd2ba72913746ac6be89658368e2631982" Feb 04 11:34:16 crc kubenswrapper[4728]: E0204 11:34:16.805795 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17dac3bc2951278f544dc44e35de35cd2ba72913746ac6be89658368e2631982\": container with ID starting with 17dac3bc2951278f544dc44e35de35cd2ba72913746ac6be89658368e2631982 not found: ID does not exist" containerID="17dac3bc2951278f544dc44e35de35cd2ba72913746ac6be89658368e2631982" Feb 04 11:34:16 crc kubenswrapper[4728]: I0204 11:34:16.805819 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17dac3bc2951278f544dc44e35de35cd2ba72913746ac6be89658368e2631982"} err="failed to get container status \"17dac3bc2951278f544dc44e35de35cd2ba72913746ac6be89658368e2631982\": rpc error: code = NotFound desc = could not find container \"17dac3bc2951278f544dc44e35de35cd2ba72913746ac6be89658368e2631982\": container with ID starting with 17dac3bc2951278f544dc44e35de35cd2ba72913746ac6be89658368e2631982 not found: ID does not exist" Feb 04 11:34:17 crc kubenswrapper[4728]: I0204 11:34:17.561109 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" path="/var/lib/kubelet/pods/2b4bc824-0bd6-4f1a-9f4b-67c844d24baa/volumes" Feb 04 11:34:17 crc kubenswrapper[4728]: I0204 11:34:17.562021 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c43db7-d07e-45eb-bd58-6651d8a0e342" path="/var/lib/kubelet/pods/68c43db7-d07e-45eb-bd58-6651d8a0e342/volumes" Feb 04 11:34:17 crc kubenswrapper[4728]: I0204 11:34:17.562723 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" path="/var/lib/kubelet/pods/81d54708-f68a-4e0b-b8e4-699a15e89f03/volumes" Feb 04 11:34:17 crc kubenswrapper[4728]: I0204 11:34:17.564158 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" path="/var/lib/kubelet/pods/af9c8d19-58ae-479c-8c47-3ce89d9c803c/volumes" Feb 04 11:34:17 crc kubenswrapper[4728]: I0204 11:34:17.564905 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b13c4294-fd84-478b-b4a0-321a5d706499" path="/var/lib/kubelet/pods/b13c4294-fd84-478b-b4a0-321a5d706499/volumes" Feb 04 11:34:17 crc kubenswrapper[4728]: I0204 11:34:17.584011 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd" event={"ID":"cc15bd74-2783-4922-bb1b-9d4b38b5f3ed","Type":"ContainerStarted","Data":"f53bd9364e25b2f9ed7654ef09e767870be7376fc71110abc73f2b9180689bc7"} Feb 04 11:34:17 crc kubenswrapper[4728]: I0204 11:34:17.584236 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd" Feb 04 11:34:17 crc kubenswrapper[4728]: I0204 11:34:17.587149 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd" Feb 04 11:34:17 crc kubenswrapper[4728]: I0204 11:34:17.598499 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8zzsd" podStartSLOduration=2.598482087 podStartE2EDuration="2.598482087s" podCreationTimestamp="2026-02-04 11:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:34:17.597170033 +0000 UTC m=+406.739874418" watchObservedRunningTime="2026-02-04 11:34:17.598482087 +0000 UTC m=+406.741186472" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011477 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-smtjc"] Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011680 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerName="extract-content" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011692 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerName="extract-content" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011702 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c43db7-d07e-45eb-bd58-6651d8a0e342" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011707 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c43db7-d07e-45eb-bd58-6651d8a0e342" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011718 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerName="extract-utilities" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011724 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerName="extract-utilities" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011732 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13c4294-fd84-478b-b4a0-321a5d706499" containerName="extract-content" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011737 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13c4294-fd84-478b-b4a0-321a5d706499" containerName="extract-content" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011763 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerName="marketplace-operator" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011769 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerName="marketplace-operator" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011777 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" containerName="extract-utilities" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011783 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" containerName="extract-utilities" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011792 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c43db7-d07e-45eb-bd58-6651d8a0e342" containerName="extract-utilities" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011798 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="68c43db7-d07e-45eb-bd58-6651d8a0e342" containerName="extract-utilities" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011804 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011809 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011818 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13c4294-fd84-478b-b4a0-321a5d706499" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011824 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13c4294-fd84-478b-b4a0-321a5d706499" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011834 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" containerName="extract-content" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011839 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" containerName="extract-content" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011846 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerName="marketplace-operator" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011852 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerName="marketplace-operator" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011858 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13c4294-fd84-478b-b4a0-321a5d706499" containerName="extract-utilities" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011865 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13c4294-fd84-478b-b4a0-321a5d706499" containerName="extract-utilities" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011876 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c43db7-d07e-45eb-bd58-6651d8a0e342" containerName="extract-content" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011881 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c43db7-d07e-45eb-bd58-6651d8a0e342" containerName="extract-content" Feb 04 11:34:18 crc kubenswrapper[4728]: E0204 11:34:18.011887 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011892 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011969 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerName="marketplace-operator" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011981 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13c4294-fd84-478b-b4a0-321a5d706499" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011990 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c43db7-d07e-45eb-bd58-6651d8a0e342" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.011997 4728 
memory_manager.go:354] "RemoveStaleState removing state" podUID="81d54708-f68a-4e0b-b8e4-699a15e89f03" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.012005 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="af9c8d19-58ae-479c-8c47-3ce89d9c803c" containerName="registry-server" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.012162 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4bc824-0bd6-4f1a-9f4b-67c844d24baa" containerName="marketplace-operator" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.012785 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smtjc" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.015248 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.021822 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-smtjc"] Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.096902 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17c89da-8862-4f3b-a801-2c56fabe069d-utilities\") pod \"redhat-marketplace-smtjc\" (UID: \"d17c89da-8862-4f3b-a801-2c56fabe069d\") " pod="openshift-marketplace/redhat-marketplace-smtjc" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.096944 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17c89da-8862-4f3b-a801-2c56fabe069d-catalog-content\") pod \"redhat-marketplace-smtjc\" (UID: \"d17c89da-8862-4f3b-a801-2c56fabe069d\") " pod="openshift-marketplace/redhat-marketplace-smtjc" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.096967 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z989c\" (UniqueName: \"kubernetes.io/projected/d17c89da-8862-4f3b-a801-2c56fabe069d-kube-api-access-z989c\") pod \"redhat-marketplace-smtjc\" (UID: \"d17c89da-8862-4f3b-a801-2c56fabe069d\") " pod="openshift-marketplace/redhat-marketplace-smtjc" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.198380 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17c89da-8862-4f3b-a801-2c56fabe069d-utilities\") pod \"redhat-marketplace-smtjc\" (UID: \"d17c89da-8862-4f3b-a801-2c56fabe069d\") " pod="openshift-marketplace/redhat-marketplace-smtjc" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.198442 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17c89da-8862-4f3b-a801-2c56fabe069d-catalog-content\") pod \"redhat-marketplace-smtjc\" (UID: \"d17c89da-8862-4f3b-a801-2c56fabe069d\") " pod="openshift-marketplace/redhat-marketplace-smtjc" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.198479 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z989c\" (UniqueName: \"kubernetes.io/projected/d17c89da-8862-4f3b-a801-2c56fabe069d-kube-api-access-z989c\") pod \"redhat-marketplace-smtjc\" (UID: \"d17c89da-8862-4f3b-a801-2c56fabe069d\") " pod="openshift-marketplace/redhat-marketplace-smtjc" Feb 04 11:34:18 crc 
kubenswrapper[4728]: I0204 11:34:18.199327 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d17c89da-8862-4f3b-a801-2c56fabe069d-catalog-content\") pod \"redhat-marketplace-smtjc\" (UID: \"d17c89da-8862-4f3b-a801-2c56fabe069d\") " pod="openshift-marketplace/redhat-marketplace-smtjc" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.200448 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d17c89da-8862-4f3b-a801-2c56fabe069d-utilities\") pod \"redhat-marketplace-smtjc\" (UID: \"d17c89da-8862-4f3b-a801-2c56fabe069d\") " pod="openshift-marketplace/redhat-marketplace-smtjc" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.208126 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-46hsv"] Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.209161 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46hsv" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.211998 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.220637 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z989c\" (UniqueName: \"kubernetes.io/projected/d17c89da-8862-4f3b-a801-2c56fabe069d-kube-api-access-z989c\") pod \"redhat-marketplace-smtjc\" (UID: \"d17c89da-8862-4f3b-a801-2c56fabe069d\") " pod="openshift-marketplace/redhat-marketplace-smtjc" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.221095 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46hsv"] Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.299299 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krmfh\" (UniqueName: \"kubernetes.io/projected/20519b63-2a55-4fd8-8ed9-d964eca67d43-kube-api-access-krmfh\") pod \"certified-operators-46hsv\" (UID: \"20519b63-2a55-4fd8-8ed9-d964eca67d43\") " pod="openshift-marketplace/certified-operators-46hsv" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.299368 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20519b63-2a55-4fd8-8ed9-d964eca67d43-utilities\") pod \"certified-operators-46hsv\" (UID: \"20519b63-2a55-4fd8-8ed9-d964eca67d43\") " pod="openshift-marketplace/certified-operators-46hsv" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.299461 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20519b63-2a55-4fd8-8ed9-d964eca67d43-catalog-content\") pod \"certified-operators-46hsv\" (UID: \"20519b63-2a55-4fd8-8ed9-d964eca67d43\") " pod="openshift-marketplace/certified-operators-46hsv" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.335010 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smtjc" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.400761 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krmfh\" (UniqueName: \"kubernetes.io/projected/20519b63-2a55-4fd8-8ed9-d964eca67d43-kube-api-access-krmfh\") pod \"certified-operators-46hsv\" (UID: \"20519b63-2a55-4fd8-8ed9-d964eca67d43\") " pod="openshift-marketplace/certified-operators-46hsv" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.401116 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20519b63-2a55-4fd8-8ed9-d964eca67d43-utilities\") pod \"certified-operators-46hsv\" (UID: \"20519b63-2a55-4fd8-8ed9-d964eca67d43\") " pod="openshift-marketplace/certified-operators-46hsv" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.401165 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20519b63-2a55-4fd8-8ed9-d964eca67d43-catalog-content\") pod \"certified-operators-46hsv\" (UID: \"20519b63-2a55-4fd8-8ed9-d964eca67d43\") " pod="openshift-marketplace/certified-operators-46hsv" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.401879 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20519b63-2a55-4fd8-8ed9-d964eca67d43-utilities\") pod \"certified-operators-46hsv\" (UID: \"20519b63-2a55-4fd8-8ed9-d964eca67d43\") " pod="openshift-marketplace/certified-operators-46hsv" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.401928 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20519b63-2a55-4fd8-8ed9-d964eca67d43-catalog-content\") pod \"certified-operators-46hsv\" (UID: \"20519b63-2a55-4fd8-8ed9-d964eca67d43\") " pod="openshift-marketplace/certified-operators-46hsv" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.418663 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krmfh\" (UniqueName: \"kubernetes.io/projected/20519b63-2a55-4fd8-8ed9-d964eca67d43-kube-api-access-krmfh\") pod \"certified-operators-46hsv\" (UID: \"20519b63-2a55-4fd8-8ed9-d964eca67d43\") " pod="openshift-marketplace/certified-operators-46hsv" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.550043 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-46hsv" Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.708637 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-smtjc"] Feb 04 11:34:18 crc kubenswrapper[4728]: I0204 11:34:18.744505 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46hsv"] Feb 04 11:34:18 crc kubenswrapper[4728]: W0204 11:34:18.755632 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20519b63_2a55_4fd8_8ed9_d964eca67d43.slice/crio-7a20693ba33482c2e95619560cf3ba98a40ac33ab23e53eccd7ae566c01048f8 WatchSource:0}: Error finding container 7a20693ba33482c2e95619560cf3ba98a40ac33ab23e53eccd7ae566c01048f8: Status 404 returned error can't find the container with id 7a20693ba33482c2e95619560cf3ba98a40ac33ab23e53eccd7ae566c01048f8 Feb 04 11:34:19 crc kubenswrapper[4728]: I0204 11:34:19.600650 4728 generic.go:334] "Generic (PLEG): container finished" podID="d17c89da-8862-4f3b-a801-2c56fabe069d" containerID="eecb3fdb5bb75504db571586ea78042319ed5e2c53495e96a7b119671218725a" exitCode=0 Feb 04 11:34:19 crc kubenswrapper[4728]: I0204 11:34:19.600720 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smtjc" event={"ID":"d17c89da-8862-4f3b-a801-2c56fabe069d","Type":"ContainerDied","Data":"eecb3fdb5bb75504db571586ea78042319ed5e2c53495e96a7b119671218725a"} Feb 04 11:34:19 crc kubenswrapper[4728]: I0204 11:34:19.600747 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smtjc" event={"ID":"d17c89da-8862-4f3b-a801-2c56fabe069d","Type":"ContainerStarted","Data":"72643f1f0529075f8ff2e9969ef0cc55f7a3da7c5c53d5aa23c95a21d25b8e37"} Feb 04 11:34:19 crc kubenswrapper[4728]: I0204 11:34:19.603091 4728 generic.go:334] "Generic (PLEG): container finished" podID="20519b63-2a55-4fd8-8ed9-d964eca67d43" containerID="9013fd6d808387e7c507aefb0416c8b1c829f9d0a70129c9d5b5a34eecef7813" exitCode=0 Feb 04 11:34:19 crc kubenswrapper[4728]: I0204 11:34:19.603298 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46hsv" event={"ID":"20519b63-2a55-4fd8-8ed9-d964eca67d43","Type":"ContainerDied","Data":"9013fd6d808387e7c507aefb0416c8b1c829f9d0a70129c9d5b5a34eecef7813"} Feb 04 11:34:19 crc kubenswrapper[4728]: I0204 11:34:19.603433 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46hsv" event={"ID":"20519b63-2a55-4fd8-8ed9-d964eca67d43","Type":"ContainerStarted","Data":"7a20693ba33482c2e95619560cf3ba98a40ac33ab23e53eccd7ae566c01048f8"} Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.409608 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g9rjz"] Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.429620 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.432154 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.439044 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9rjz"]
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.613899 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-htcv9"]
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.615564 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.617627 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htcv9"]
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.617664 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smtjc" event={"ID":"d17c89da-8862-4f3b-a801-2c56fabe069d","Type":"ContainerDied","Data":"0a796f0426c7af23145f76ad43932e7c1a819a5e5d3232a253d9e2d9551e3514"}
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.617671 4728 generic.go:334] "Generic (PLEG): container finished" podID="d17c89da-8862-4f3b-a801-2c56fabe069d" containerID="0a796f0426c7af23145f76ad43932e7c1a819a5e5d3232a253d9e2d9551e3514" exitCode=0
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.617723 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.628972 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g274x\" (UniqueName: \"kubernetes.io/projected/0e4e9ddb-3fa7-445f-9a8a-662816c760c7-kube-api-access-g274x\") pod \"community-operators-htcv9\" (UID: \"0e4e9ddb-3fa7-445f-9a8a-662816c760c7\") " pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.629014 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4e9ddb-3fa7-445f-9a8a-662816c760c7-catalog-content\") pod \"community-operators-htcv9\" (UID: \"0e4e9ddb-3fa7-445f-9a8a-662816c760c7\") " pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.629031 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4e9ddb-3fa7-445f-9a8a-662816c760c7-utilities\") pod \"community-operators-htcv9\" (UID: \"0e4e9ddb-3fa7-445f-9a8a-662816c760c7\") " pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.629048 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adee048d-8c91-4807-bb6f-84bb4af68a27-catalog-content\") pod \"redhat-operators-g9rjz\" (UID: \"adee048d-8c91-4807-bb6f-84bb4af68a27\") " pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.629077 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ppn7\" (UniqueName: \"kubernetes.io/projected/adee048d-8c91-4807-bb6f-84bb4af68a27-kube-api-access-5ppn7\") pod \"redhat-operators-g9rjz\" (UID: \"adee048d-8c91-4807-bb6f-84bb4af68a27\") " pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.629094 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adee048d-8c91-4807-bb6f-84bb4af68a27-utilities\") pod \"redhat-operators-g9rjz\" (UID: \"adee048d-8c91-4807-bb6f-84bb4af68a27\") " pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.730318 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g274x\" (UniqueName: \"kubernetes.io/projected/0e4e9ddb-3fa7-445f-9a8a-662816c760c7-kube-api-access-g274x\") pod \"community-operators-htcv9\" (UID: \"0e4e9ddb-3fa7-445f-9a8a-662816c760c7\") " pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.730384 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4e9ddb-3fa7-445f-9a8a-662816c760c7-catalog-content\") pod \"community-operators-htcv9\" (UID: \"0e4e9ddb-3fa7-445f-9a8a-662816c760c7\") " pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.730410 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4e9ddb-3fa7-445f-9a8a-662816c760c7-utilities\") pod \"community-operators-htcv9\" (UID: \"0e4e9ddb-3fa7-445f-9a8a-662816c760c7\") " pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.730433 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adee048d-8c91-4807-bb6f-84bb4af68a27-catalog-content\") pod \"redhat-operators-g9rjz\" (UID: \"adee048d-8c91-4807-bb6f-84bb4af68a27\") " pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.730469 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ppn7\" (UniqueName: \"kubernetes.io/projected/adee048d-8c91-4807-bb6f-84bb4af68a27-kube-api-access-5ppn7\") pod \"redhat-operators-g9rjz\" (UID: \"adee048d-8c91-4807-bb6f-84bb4af68a27\") " pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.730493 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adee048d-8c91-4807-bb6f-84bb4af68a27-utilities\") pod \"redhat-operators-g9rjz\" (UID: \"adee048d-8c91-4807-bb6f-84bb4af68a27\") " pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.730842 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4e9ddb-3fa7-445f-9a8a-662816c760c7-utilities\") pod \"community-operators-htcv9\" (UID: \"0e4e9ddb-3fa7-445f-9a8a-662816c760c7\") " pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.730895 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adee048d-8c91-4807-bb6f-84bb4af68a27-catalog-content\") pod \"redhat-operators-g9rjz\" (UID: \"adee048d-8c91-4807-bb6f-84bb4af68a27\") " pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.730935 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4e9ddb-3fa7-445f-9a8a-662816c760c7-catalog-content\") pod \"community-operators-htcv9\" (UID: \"0e4e9ddb-3fa7-445f-9a8a-662816c760c7\") " pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.730966 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adee048d-8c91-4807-bb6f-84bb4af68a27-utilities\") pod \"redhat-operators-g9rjz\" (UID: \"adee048d-8c91-4807-bb6f-84bb4af68a27\") " pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.748133 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g274x\" (UniqueName: \"kubernetes.io/projected/0e4e9ddb-3fa7-445f-9a8a-662816c760c7-kube-api-access-g274x\") pod \"community-operators-htcv9\" (UID: \"0e4e9ddb-3fa7-445f-9a8a-662816c760c7\") " pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.748155 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ppn7\" (UniqueName: \"kubernetes.io/projected/adee048d-8c91-4807-bb6f-84bb4af68a27-kube-api-access-5ppn7\") pod \"redhat-operators-g9rjz\" (UID: \"adee048d-8c91-4807-bb6f-84bb4af68a27\") " pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.749531 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.933225 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:20 crc kubenswrapper[4728]: I0204 11:34:20.956336 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9rjz"]
Feb 04 11:34:20 crc kubenswrapper[4728]: W0204 11:34:20.963626 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadee048d_8c91_4807_bb6f_84bb4af68a27.slice/crio-259fb2f0099fa7bbeba17aaa8e01ec0dd26040fa94767d27ed6594a3eb214adb WatchSource:0}: Error finding container 259fb2f0099fa7bbeba17aaa8e01ec0dd26040fa94767d27ed6594a3eb214adb: Status 404 returned error can't find the container with id 259fb2f0099fa7bbeba17aaa8e01ec0dd26040fa94767d27ed6594a3eb214adb
Feb 04 11:34:21 crc kubenswrapper[4728]: I0204 11:34:21.099456 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htcv9"]
Feb 04 11:34:21 crc kubenswrapper[4728]: W0204 11:34:21.111602 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e4e9ddb_3fa7_445f_9a8a_662816c760c7.slice/crio-fa79e55ae3bfa47eed9509555b424b6b301c759c742d821d298cd64d03e89a7b WatchSource:0}: Error finding container fa79e55ae3bfa47eed9509555b424b6b301c759c742d821d298cd64d03e89a7b: Status 404 returned error can't find the container with id fa79e55ae3bfa47eed9509555b424b6b301c759c742d821d298cd64d03e89a7b
Feb 04 11:34:21 crc kubenswrapper[4728]: I0204 11:34:21.623436 4728 generic.go:334] "Generic (PLEG): container finished" podID="adee048d-8c91-4807-bb6f-84bb4af68a27" containerID="94388e5407478cdac969e34c0e59eb63647a5c7a6cb237fd4c6d28d371d419f5" exitCode=0
Feb 04 11:34:21 crc kubenswrapper[4728]: I0204 11:34:21.623548 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9rjz" event={"ID":"adee048d-8c91-4807-bb6f-84bb4af68a27","Type":"ContainerDied","Data":"94388e5407478cdac969e34c0e59eb63647a5c7a6cb237fd4c6d28d371d419f5"}
Feb 04 11:34:21 crc kubenswrapper[4728]: I0204 11:34:21.623720 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9rjz" event={"ID":"adee048d-8c91-4807-bb6f-84bb4af68a27","Type":"ContainerStarted","Data":"259fb2f0099fa7bbeba17aaa8e01ec0dd26040fa94767d27ed6594a3eb214adb"}
Feb 04 11:34:21 crc kubenswrapper[4728]: I0204 11:34:21.630064 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smtjc" event={"ID":"d17c89da-8862-4f3b-a801-2c56fabe069d","Type":"ContainerStarted","Data":"8a27602ff5ca0893334fed9775efab8124dafcce602aff7fe6353f8084da5c78"}
Feb 04 11:34:21 crc kubenswrapper[4728]: I0204 11:34:21.632113 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e4e9ddb-3fa7-445f-9a8a-662816c760c7" containerID="114715f1d470d2e5a0ec2cb53c155662502b5fd4882d0a6ba54a5293e459669b" exitCode=0
Feb 04 11:34:21 crc kubenswrapper[4728]: I0204 11:34:21.632164 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htcv9" event={"ID":"0e4e9ddb-3fa7-445f-9a8a-662816c760c7","Type":"ContainerDied","Data":"114715f1d470d2e5a0ec2cb53c155662502b5fd4882d0a6ba54a5293e459669b"}
Feb 04 11:34:21 crc kubenswrapper[4728]: I0204 11:34:21.632189 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htcv9" event={"ID":"0e4e9ddb-3fa7-445f-9a8a-662816c760c7","Type":"ContainerStarted","Data":"fa79e55ae3bfa47eed9509555b424b6b301c759c742d821d298cd64d03e89a7b"}
Feb 04 11:34:21 crc kubenswrapper[4728]: I0204 11:34:21.673776 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-smtjc" podStartSLOduration=3.10932818 podStartE2EDuration="4.673743674s" podCreationTimestamp="2026-02-04 11:34:17 +0000 UTC" firstStartedPulling="2026-02-04 11:34:19.603936624 +0000 UTC m=+408.746641009" lastFinishedPulling="2026-02-04 11:34:21.168352118 +0000 UTC m=+410.311056503" observedRunningTime="2026-02-04 11:34:21.657048873 +0000 UTC m=+410.799753278" watchObservedRunningTime="2026-02-04 11:34:21.673743674 +0000 UTC m=+410.816448049"
Feb 04 11:34:24 crc kubenswrapper[4728]: I0204 11:34:24.647401 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e4e9ddb-3fa7-445f-9a8a-662816c760c7" containerID="9990433d830cc85812153e35e76c5952f92ebc8a79ff4a871a5e85a0e266160f" exitCode=0
Feb 04 11:34:24 crc kubenswrapper[4728]: I0204 11:34:24.647479 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htcv9" event={"ID":"0e4e9ddb-3fa7-445f-9a8a-662816c760c7","Type":"ContainerDied","Data":"9990433d830cc85812153e35e76c5952f92ebc8a79ff4a871a5e85a0e266160f"}
Feb 04 11:34:24 crc kubenswrapper[4728]: I0204 11:34:24.650228 4728 generic.go:334] "Generic (PLEG): container finished" podID="adee048d-8c91-4807-bb6f-84bb4af68a27" containerID="aeb2e0bb6f7ea8d16c40ae42507d1d024b938d5a7c627c9dd0bc61d92b87120a" exitCode=0
Feb 04 11:34:24 crc kubenswrapper[4728]: I0204 11:34:24.650448 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9rjz" event={"ID":"adee048d-8c91-4807-bb6f-84bb4af68a27","Type":"ContainerDied","Data":"aeb2e0bb6f7ea8d16c40ae42507d1d024b938d5a7c627c9dd0bc61d92b87120a"}
Feb 04 11:34:24 crc kubenswrapper[4728]: I0204 11:34:24.658136 4728 generic.go:334] "Generic (PLEG): container finished" podID="20519b63-2a55-4fd8-8ed9-d964eca67d43" containerID="e537bf3aa4d177b6680acde26bea0beadf3d6c79a81d2f3cbb3ff66ceb219f82" exitCode=0
Feb 04 11:34:24 crc kubenswrapper[4728]: I0204 11:34:24.658196 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46hsv" event={"ID":"20519b63-2a55-4fd8-8ed9-d964eca67d43","Type":"ContainerDied","Data":"e537bf3aa4d177b6680acde26bea0beadf3d6c79a81d2f3cbb3ff66ceb219f82"}
Feb 04 11:34:26 crc kubenswrapper[4728]: I0204 11:34:26.671496 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46hsv" event={"ID":"20519b63-2a55-4fd8-8ed9-d964eca67d43","Type":"ContainerStarted","Data":"17e9afc21ba3df96713f9fa47369a6411a8a347441389c32132894f69ed6265d"}
Feb 04 11:34:26 crc kubenswrapper[4728]: I0204 11:34:26.675003 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htcv9" event={"ID":"0e4e9ddb-3fa7-445f-9a8a-662816c760c7","Type":"ContainerStarted","Data":"25d1ee712a1ffdaff28183c802de87c9628aa842b8d88d3b5f0f493352103ab2"}
Feb 04 11:34:26 crc kubenswrapper[4728]: I0204 11:34:26.676877 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9rjz" event={"ID":"adee048d-8c91-4807-bb6f-84bb4af68a27","Type":"ContainerStarted","Data":"6815e58994356d1e091c8e6ea9d63db52635812eb41f838ff48541c7f074cc59"}
Feb 04 11:34:26 crc kubenswrapper[4728]: I0204 11:34:26.687893 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-46hsv" podStartSLOduration=3.191022704 podStartE2EDuration="8.687874244s" podCreationTimestamp="2026-02-04 11:34:18 +0000 UTC" firstStartedPulling="2026-02-04 11:34:19.604417287 +0000 UTC m=+408.747121682" lastFinishedPulling="2026-02-04 11:34:25.101268837 +0000 UTC m=+414.243973222" observedRunningTime="2026-02-04 11:34:26.687494524 +0000 UTC m=+415.830198919" watchObservedRunningTime="2026-02-04 11:34:26.687874244 +0000 UTC m=+415.830578629"
Feb 04 11:34:26 crc kubenswrapper[4728]: I0204 11:34:26.707559 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g9rjz" podStartSLOduration=3.2988514540000002 podStartE2EDuration="6.707540272s" podCreationTimestamp="2026-02-04 11:34:20 +0000 UTC" firstStartedPulling="2026-02-04 11:34:21.625936588 +0000 UTC m=+410.768640973" lastFinishedPulling="2026-02-04 11:34:25.034625406 +0000 UTC m=+414.177329791" observedRunningTime="2026-02-04 11:34:26.703767155 +0000 UTC m=+415.846471540" watchObservedRunningTime="2026-02-04 11:34:26.707540272 +0000 UTC m=+415.850244647"
Feb 04 11:34:26 crc kubenswrapper[4728]: I0204 11:34:26.721590 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-htcv9" podStartSLOduration=3.06904236 podStartE2EDuration="6.721572535s" podCreationTimestamp="2026-02-04 11:34:20 +0000 UTC" firstStartedPulling="2026-02-04 11:34:21.633479564 +0000 UTC m=+410.776183949" lastFinishedPulling="2026-02-04 11:34:25.286009739 +0000 UTC m=+414.428714124" observedRunningTime="2026-02-04 11:34:26.721111512 +0000 UTC m=+415.863815897" watchObservedRunningTime="2026-02-04 11:34:26.721572535 +0000 UTC m=+415.864276920"
Feb 04 11:34:28 crc kubenswrapper[4728]: I0204 11:34:28.336190 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-smtjc"
Feb 04 11:34:28 crc kubenswrapper[4728]: I0204 11:34:28.336557 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-smtjc"
Feb 04 11:34:28 crc kubenswrapper[4728]: I0204 11:34:28.387472 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-smtjc"
Feb 04 11:34:28 crc kubenswrapper[4728]: I0204 11:34:28.550432 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-46hsv"
Feb 04 11:34:28 crc kubenswrapper[4728]: I0204 11:34:28.550484 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-46hsv"
Feb 04 11:34:28 crc kubenswrapper[4728]: I0204 11:34:28.589674 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-46hsv"
Feb 04 11:34:28 crc kubenswrapper[4728]: I0204 11:34:28.725282 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-smtjc"
Feb 04 11:34:30 crc kubenswrapper[4728]: I0204 11:34:30.751093 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:30 crc kubenswrapper[4728]: I0204 11:34:30.751484 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:30 crc kubenswrapper[4728]: I0204 11:34:30.933406 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:30 crc kubenswrapper[4728]: I0204 11:34:30.933856 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:30 crc kubenswrapper[4728]: I0204 11:34:30.976108 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:31 crc kubenswrapper[4728]: I0204 11:34:31.785009 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-htcv9"
Feb 04 11:34:31 crc kubenswrapper[4728]: I0204 11:34:31.805292 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g9rjz" podUID="adee048d-8c91-4807-bb6f-84bb4af68a27" containerName="registry-server" probeResult="failure" output=<
Feb 04 11:34:31 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s
Feb 04 11:34:31 crc kubenswrapper[4728]: >
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.391880 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-57d49" podUID="4983cdcb-9bb3-41d2-9164-f24ee5753562" containerName="registry" containerID="cri-o://ffe232dd5d099115470ad128bfacdcae2357a5a8f7c8af51d67abd93f690b786" gracePeriod=30
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.718964 4728 generic.go:334] "Generic (PLEG): container finished" podID="4983cdcb-9bb3-41d2-9164-f24ee5753562" containerID="ffe232dd5d099115470ad128bfacdcae2357a5a8f7c8af51d67abd93f690b786" exitCode=0
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.719113 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-57d49" event={"ID":"4983cdcb-9bb3-41d2-9164-f24ee5753562","Type":"ContainerDied","Data":"ffe232dd5d099115470ad128bfacdcae2357a5a8f7c8af51d67abd93f690b786"}
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.719340 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-57d49" event={"ID":"4983cdcb-9bb3-41d2-9164-f24ee5753562","Type":"ContainerDied","Data":"42fee2d1dd36558635bb00047da35784f58add06c501a6564a3e8a742cb9e146"}
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.719364 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42fee2d1dd36558635bb00047da35784f58add06c501a6564a3e8a742cb9e146"
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.720849 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.853452 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4983cdcb-9bb3-41d2-9164-f24ee5753562\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") "
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.853524 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4983cdcb-9bb3-41d2-9164-f24ee5753562-ca-trust-extracted\") pod \"4983cdcb-9bb3-41d2-9164-f24ee5753562\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") "
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.853568 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-bound-sa-token\") pod \"4983cdcb-9bb3-41d2-9164-f24ee5753562\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") "
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.853637 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l7kf\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-kube-api-access-5l7kf\") pod \"4983cdcb-9bb3-41d2-9164-f24ee5753562\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") "
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.853697 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-tls\") pod \"4983cdcb-9bb3-41d2-9164-f24ee5753562\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") "
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.853729 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-certificates\") pod \"4983cdcb-9bb3-41d2-9164-f24ee5753562\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") "
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.853805 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-trusted-ca\") pod \"4983cdcb-9bb3-41d2-9164-f24ee5753562\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") "
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.853866 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4983cdcb-9bb3-41d2-9164-f24ee5753562-installation-pull-secrets\") pod \"4983cdcb-9bb3-41d2-9164-f24ee5753562\" (UID: \"4983cdcb-9bb3-41d2-9164-f24ee5753562\") "
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.854832 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4983cdcb-9bb3-41d2-9164-f24ee5753562" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.854994 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4983cdcb-9bb3-41d2-9164-f24ee5753562" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.859119 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4983cdcb-9bb3-41d2-9164-f24ee5753562" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.859532 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-kube-api-access-5l7kf" (OuterVolumeSpecName: "kube-api-access-5l7kf") pod "4983cdcb-9bb3-41d2-9164-f24ee5753562" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562"). InnerVolumeSpecName "kube-api-access-5l7kf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.861061 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4983cdcb-9bb3-41d2-9164-f24ee5753562-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4983cdcb-9bb3-41d2-9164-f24ee5753562" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.861300 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4983cdcb-9bb3-41d2-9164-f24ee5753562" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.867250 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4983cdcb-9bb3-41d2-9164-f24ee5753562" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.873775 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4983cdcb-9bb3-41d2-9164-f24ee5753562-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4983cdcb-9bb3-41d2-9164-f24ee5753562" (UID: "4983cdcb-9bb3-41d2-9164-f24ee5753562"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.955744 4728 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.956064 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l7kf\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-kube-api-access-5l7kf\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.956078 4728 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.956087 4728 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.956096 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4983cdcb-9bb3-41d2-9164-f24ee5753562-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.956107 4728 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4983cdcb-9bb3-41d2-9164-f24ee5753562-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:33 crc kubenswrapper[4728]: I0204 11:34:33.956117 4728 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4983cdcb-9bb3-41d2-9164-f24ee5753562-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 04 11:34:34 crc kubenswrapper[4728]: I0204 11:34:34.730920 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-57d49"
Feb 04 11:34:34 crc kubenswrapper[4728]: I0204 11:34:34.759737 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57d49"]
Feb 04 11:34:34 crc kubenswrapper[4728]: I0204 11:34:34.762857 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57d49"]
Feb 04 11:34:35 crc kubenswrapper[4728]: I0204 11:34:35.448382 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:34:35 crc kubenswrapper[4728]: I0204 11:34:35.448454 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:34:35 crc kubenswrapper[4728]: I0204 11:34:35.448500 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 11:34:35 crc kubenswrapper[4728]: I0204 11:34:35.449119 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70dd38e437063854e1d19f2cb326f62fcfbcc9c4a621e22232ef875b06d7434d"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 11:34:35 crc kubenswrapper[4728]: I0204 11:34:35.449189 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://70dd38e437063854e1d19f2cb326f62fcfbcc9c4a621e22232ef875b06d7434d" gracePeriod=600
Feb 04 11:34:35 crc kubenswrapper[4728]: I0204 11:34:35.559879 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4983cdcb-9bb3-41d2-9164-f24ee5753562" path="/var/lib/kubelet/pods/4983cdcb-9bb3-41d2-9164-f24ee5753562/volumes"
Feb 04 11:34:38 crc kubenswrapper[4728]: I0204 11:34:38.600194 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-46hsv"
Feb 04 11:34:39 crc kubenswrapper[4728]: I0204 11:34:39.757420 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="70dd38e437063854e1d19f2cb326f62fcfbcc9c4a621e22232ef875b06d7434d" exitCode=0
Feb 04 11:34:39 crc kubenswrapper[4728]: I0204 11:34:39.757474 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"70dd38e437063854e1d19f2cb326f62fcfbcc9c4a621e22232ef875b06d7434d"}
Feb 04 11:34:39 crc kubenswrapper[4728]: I0204 11:34:39.757512 4728 scope.go:117] "RemoveContainer" containerID="3d97b3e9b84ae22132bec9603ab8212d9270896d53a0a425dc54e104101916b4"
Feb 04 11:34:40 crc kubenswrapper[4728]: I0204 11:34:40.765704 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"44bf5f747d4965ce6618c46b58c4eacb171a54b6f11bb718ba6061de1fa3a0cc"}
Feb 04 11:34:40 crc kubenswrapper[4728]: I0204 11:34:40.794447 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:34:40 crc kubenswrapper[4728]: I0204 11:34:40.850805 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g9rjz"
Feb 04 11:36:31 crc kubenswrapper[4728]: I0204 11:36:31.737014 4728 scope.go:117] "RemoveContainer" containerID="ffe232dd5d099115470ad128bfacdcae2357a5a8f7c8af51d67abd93f690b786"
Feb 04 11:37:05 crc kubenswrapper[4728]: I0204 11:37:05.448366 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:37:05 crc kubenswrapper[4728]: I0204 11:37:05.448952 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:37:31 crc kubenswrapper[4728]: I0204 11:37:31.773412 4728 scope.go:117] "RemoveContainer" containerID="d325bf425f3e0a82a14fc0af6e41d65b58f54c35ab7a9e23b66dc27230528e91"
Feb 04 11:37:31 crc kubenswrapper[4728]: I0204 11:37:31.799156 4728 scope.go:117] "RemoveContainer" containerID="51f4cf793ddc9bb801c4aecaeef9c1599d389983d83ec4452b73ace2ea47180e"
Feb 04 11:37:35 crc kubenswrapper[4728]: I0204 11:37:35.448096 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:37:35 crc kubenswrapper[4728]: I0204 11:37:35.448477 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:38:05 crc kubenswrapper[4728]: I0204 11:38:05.448704 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:38:05 crc kubenswrapper[4728]: I0204 11:38:05.449434 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:38:05 crc kubenswrapper[4728]: I0204 11:38:05.449488 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 11:38:05 crc kubenswrapper[4728]: I0204 11:38:05.450318 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44bf5f747d4965ce6618c46b58c4eacb171a54b6f11bb718ba6061de1fa3a0cc"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 11:38:05 crc kubenswrapper[4728]: I0204 11:38:05.450499 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://44bf5f747d4965ce6618c46b58c4eacb171a54b6f11bb718ba6061de1fa3a0cc" gracePeriod=600
Feb 04 11:38:05 crc kubenswrapper[4728]: I0204 11:38:05.873263 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="44bf5f747d4965ce6618c46b58c4eacb171a54b6f11bb718ba6061de1fa3a0cc" exitCode=0
Feb 04 11:38:05 crc kubenswrapper[4728]: I0204 11:38:05.873326 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"44bf5f747d4965ce6618c46b58c4eacb171a54b6f11bb718ba6061de1fa3a0cc"}
Feb 04 11:38:05 crc kubenswrapper[4728]: I0204 11:38:05.873594 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"d1141676880e32a0c8de5aba6aaf202ec56fa7791680367a5b1bd8fc7c075b2b"}
Feb 04 11:38:05 crc kubenswrapper[4728]: I0204 11:38:05.873615 4728 scope.go:117] "RemoveContainer" containerID="70dd38e437063854e1d19f2cb326f62fcfbcc9c4a621e22232ef875b06d7434d"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.684305 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5d4gb"]
Feb 04 11:39:28 crc kubenswrapper[4728]: E0204 11:39:28.686148 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4983cdcb-9bb3-41d2-9164-f24ee5753562" containerName="registry"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.686238 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4983cdcb-9bb3-41d2-9164-f24ee5753562" containerName="registry"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.686424 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4983cdcb-9bb3-41d2-9164-f24ee5753562" containerName="registry"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.686968 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5d4gb"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.689414 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.689620 4728 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ss4nf"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.689748 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.697776 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5d4gb"]
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.705886 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-9qsm6"]
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.706678 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-9qsm6"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.709984 4728 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zzbtq"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.713349 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4qfrz"]
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.714199 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4qfrz"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.715996 4728 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-r46lw"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.728127 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4qfrz"]
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.734265 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-9qsm6"]
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.829891 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5fxn\" (UniqueName: \"kubernetes.io/projected/0985f153-e731-4ee3-8c41-315179a557dc-kube-api-access-c5fxn\") pod \"cert-manager-cainjector-cf98fcc89-5d4gb\" (UID: \"0985f153-e731-4ee3-8c41-315179a557dc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5d4gb"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.829941 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc8t2\" (UniqueName: \"kubernetes.io/projected/58edae62-bdc6-49b2-89bc-a2e0ff5184d6-kube-api-access-jc8t2\") pod \"cert-manager-858654f9db-9qsm6\" (UID: \"58edae62-bdc6-49b2-89bc-a2e0ff5184d6\") " pod="cert-manager/cert-manager-858654f9db-9qsm6"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.829963 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcbd7\" (UniqueName: \"kubernetes.io/projected/cc113635-ef4c-4427-b4f8-92def9d5c19f-kube-api-access-vcbd7\") pod \"cert-manager-webhook-687f57d79b-4qfrz\" (UID: \"cc113635-ef4c-4427-b4f8-92def9d5c19f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4qfrz"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.931738 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5fxn\" (UniqueName: \"kubernetes.io/projected/0985f153-e731-4ee3-8c41-315179a557dc-kube-api-access-c5fxn\") pod \"cert-manager-cainjector-cf98fcc89-5d4gb\" (UID: \"0985f153-e731-4ee3-8c41-315179a557dc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5d4gb"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.931827 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc8t2\" (UniqueName: \"kubernetes.io/projected/58edae62-bdc6-49b2-89bc-a2e0ff5184d6-kube-api-access-jc8t2\") pod \"cert-manager-858654f9db-9qsm6\" (UID: \"58edae62-bdc6-49b2-89bc-a2e0ff5184d6\") " pod="cert-manager/cert-manager-858654f9db-9qsm6"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.932003 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcbd7\" (UniqueName: \"kubernetes.io/projected/cc113635-ef4c-4427-b4f8-92def9d5c19f-kube-api-access-vcbd7\") pod \"cert-manager-webhook-687f57d79b-4qfrz\" (UID: \"cc113635-ef4c-4427-b4f8-92def9d5c19f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4qfrz"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.955813 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc8t2\" (UniqueName: \"kubernetes.io/projected/58edae62-bdc6-49b2-89bc-a2e0ff5184d6-kube-api-access-jc8t2\") pod \"cert-manager-858654f9db-9qsm6\" (UID: \"58edae62-bdc6-49b2-89bc-a2e0ff5184d6\") " pod="cert-manager/cert-manager-858654f9db-9qsm6"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.955840 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcbd7\" (UniqueName: \"kubernetes.io/projected/cc113635-ef4c-4427-b4f8-92def9d5c19f-kube-api-access-vcbd7\") pod \"cert-manager-webhook-687f57d79b-4qfrz\" (UID: \"cc113635-ef4c-4427-b4f8-92def9d5c19f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4qfrz"
Feb 04 11:39:28 crc kubenswrapper[4728]: I0204 11:39:28.955989 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5fxn\" (UniqueName: \"kubernetes.io/projected/0985f153-e731-4ee3-8c41-315179a557dc-kube-api-access-c5fxn\") pod \"cert-manager-cainjector-cf98fcc89-5d4gb\" (UID: \"0985f153-e731-4ee3-8c41-315179a557dc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5d4gb"
Feb 04 11:39:29 crc kubenswrapper[4728]: I0204 11:39:29.003891 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5d4gb"
Feb 04 11:39:29 crc kubenswrapper[4728]: I0204 11:39:29.028383 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-9qsm6"
Feb 04 11:39:29 crc kubenswrapper[4728]: I0204 11:39:29.039547 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4qfrz"
Feb 04 11:39:29 crc kubenswrapper[4728]: I0204 11:39:29.237744 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5d4gb"]
Feb 04 11:39:29 crc kubenswrapper[4728]: W0204 11:39:29.243937 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0985f153_e731_4ee3_8c41_315179a557dc.slice/crio-d25ca96b274def3ddc19b352284e3f4837f2dbfb861701abf6012c49e9b495fd WatchSource:0}: Error finding container d25ca96b274def3ddc19b352284e3f4837f2dbfb861701abf6012c49e9b495fd: Status 404 returned error can't find the container with id d25ca96b274def3ddc19b352284e3f4837f2dbfb861701abf6012c49e9b495fd
Feb 04 11:39:29 crc kubenswrapper[4728]: I0204 11:39:29.248285 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 04 11:39:29 crc kubenswrapper[4728]: I0204 11:39:29.268287 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4qfrz"]
Feb 04 11:39:29 crc kubenswrapper[4728]: I0204 11:39:29.501568 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-9qsm6"]
Feb 04 11:39:29 crc kubenswrapper[4728]: W0204 11:39:29.510412 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58edae62_bdc6_49b2_89bc_a2e0ff5184d6.slice/crio-dd058f6b3e8c636170e0b1e6677fb88924cfdceea67c5ba1f6782d35a14ad462 WatchSource:0}: Error finding container dd058f6b3e8c636170e0b1e6677fb88924cfdceea67c5ba1f6782d35a14ad462: Status 404 returned error can't find the container with id dd058f6b3e8c636170e0b1e6677fb88924cfdceea67c5ba1f6782d35a14ad462
Feb 04 11:39:29 crc kubenswrapper[4728]: I0204 11:39:29.858866 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5d4gb" event={"ID":"0985f153-e731-4ee3-8c41-315179a557dc","Type":"ContainerStarted","Data":"d25ca96b274def3ddc19b352284e3f4837f2dbfb861701abf6012c49e9b495fd"}
Feb 04 11:39:29 crc kubenswrapper[4728]: I0204 11:39:29.859824 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-9qsm6" event={"ID":"58edae62-bdc6-49b2-89bc-a2e0ff5184d6","Type":"ContainerStarted","Data":"dd058f6b3e8c636170e0b1e6677fb88924cfdceea67c5ba1f6782d35a14ad462"}
Feb 04 11:39:29 crc kubenswrapper[4728]: I0204 11:39:29.860638 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4qfrz" event={"ID":"cc113635-ef4c-4427-b4f8-92def9d5c19f","Type":"ContainerStarted","Data":"359d213f8b6f0cefe025440501048070683b2a2edd8fad790a2de81ef3dedd33"}
Feb 04 11:39:32 crc kubenswrapper[4728]: I0204 11:39:32.876227 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4qfrz" event={"ID":"cc113635-ef4c-4427-b4f8-92def9d5c19f","Type":"ContainerStarted","Data":"3e49e72d0bc6b02821ca027258a9c87ead4dad0d09f0153a071d4c83f23ed7d5"}
Feb 04 11:39:32 crc kubenswrapper[4728]: I0204 11:39:32.876541 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-4qfrz"
Feb 04 11:39:32 crc kubenswrapper[4728]: I0204 11:39:32.878125 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5d4gb" event={"ID":"0985f153-e731-4ee3-8c41-315179a557dc","Type":"ContainerStarted","Data":"8a0f2e5bd0d7ae8ca7613df44bcf9795d6d01d1eaa349b3442f18d6e7c41e658"}
event={"ID":"0985f153-e731-4ee3-8c41-315179a557dc","Type":"ContainerStarted","Data":"8a0f2e5bd0d7ae8ca7613df44bcf9795d6d01d1eaa349b3442f18d6e7c41e658"} Feb 04 11:39:32 crc kubenswrapper[4728]: I0204 11:39:32.896382 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-4qfrz" podStartSLOduration=1.528514368 podStartE2EDuration="4.896363716s" podCreationTimestamp="2026-02-04 11:39:28 +0000 UTC" firstStartedPulling="2026-02-04 11:39:29.271855127 +0000 UTC m=+718.414559512" lastFinishedPulling="2026-02-04 11:39:32.639704475 +0000 UTC m=+721.782408860" observedRunningTime="2026-02-04 11:39:32.891151667 +0000 UTC m=+722.033856072" watchObservedRunningTime="2026-02-04 11:39:32.896363716 +0000 UTC m=+722.039068101" Feb 04 11:39:32 crc kubenswrapper[4728]: I0204 11:39:32.906920 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5d4gb" podStartSLOduration=1.569711472 podStartE2EDuration="4.906901098s" podCreationTimestamp="2026-02-04 11:39:28 +0000 UTC" firstStartedPulling="2026-02-04 11:39:29.247946644 +0000 UTC m=+718.390651029" lastFinishedPulling="2026-02-04 11:39:32.58513627 +0000 UTC m=+721.727840655" observedRunningTime="2026-02-04 11:39:32.903861042 +0000 UTC m=+722.046565447" watchObservedRunningTime="2026-02-04 11:39:32.906901098 +0000 UTC m=+722.049605483" Feb 04 11:39:33 crc kubenswrapper[4728]: I0204 11:39:33.885895 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-9qsm6" event={"ID":"58edae62-bdc6-49b2-89bc-a2e0ff5184d6","Type":"ContainerStarted","Data":"be6ac5ba8a861b0a19ba6d6063b35831acdf8eec874147d3592419c420819031"} Feb 04 11:39:33 crc kubenswrapper[4728]: I0204 11:39:33.908480 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-9qsm6" podStartSLOduration=1.83349636 podStartE2EDuration="5.908457612s" podCreationTimestamp="2026-02-04 11:39:28 +0000 UTC" firstStartedPulling="2026-02-04 11:39:29.512572163 +0000 UTC m=+718.655276548" lastFinishedPulling="2026-02-04 11:39:33.587533415 +0000 UTC m=+722.730237800" observedRunningTime="2026-02-04 11:39:33.902879503 +0000 UTC m=+723.045583918" watchObservedRunningTime="2026-02-04 11:39:33.908457612 +0000 UTC m=+723.051162007" Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.778136 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c6r5d"] Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.779386 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="nbdb" containerID="cri-o://ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb" gracePeriod=30 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.779502 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="sbdb" containerID="cri-o://15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931" gracePeriod=30 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.779591 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d" gracePeriod=30 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.779673 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="northd" containerID="cri-o://66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d" gracePeriod=30 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.779702 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovn-acl-logging" containerID="cri-o://c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022" gracePeriod=30 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.779686 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="kube-rbac-proxy-node" containerID="cri-o://02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b" gracePeriod=30 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.779815 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovn-controller" containerID="cri-o://51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f" gracePeriod=30 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.807346 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" containerID="cri-o://b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828" gracePeriod=30 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.920322 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dc6rd_3dbc56be-abfc-4180-870e-f4c19bd09f4b/kube-multus/2.log" Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.920764 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dc6rd_3dbc56be-abfc-4180-870e-f4c19bd09f4b/kube-multus/1.log" Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.920796 4728 generic.go:334] "Generic (PLEG): container finished" podID="3dbc56be-abfc-4180-870e-f4c19bd09f4b" containerID="c5303ece67988b48c9f7078f4f5f783e2dfa7759b80454d2a50b80d956debf57" exitCode=2 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.920845 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dc6rd" event={"ID":"3dbc56be-abfc-4180-870e-f4c19bd09f4b","Type":"ContainerDied","Data":"c5303ece67988b48c9f7078f4f5f783e2dfa7759b80454d2a50b80d956debf57"} Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.920875 4728 scope.go:117] "RemoveContainer" containerID="67b4e2d21060b4b10a96936588b1c9787d2da1a43c84356f599a361a831291ca" Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.921260 4728 scope.go:117] "RemoveContainer" containerID="c5303ece67988b48c9f7078f4f5f783e2dfa7759b80454d2a50b80d956debf57" Feb 04 11:39:38 crc kubenswrapper[4728]: E0204 11:39:38.921419 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus 
pod=multus-dc6rd_openshift-multus(3dbc56be-abfc-4180-870e-f4c19bd09f4b)\"" pod="openshift-multus/multus-dc6rd" podUID="3dbc56be-abfc-4180-870e-f4c19bd09f4b" Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.928314 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/3.log" Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.931009 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovn-acl-logging/0.log" Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.931420 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovn-controller/0.log" Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.931912 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d" exitCode=0 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.931935 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b" exitCode=0 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.931944 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022" exitCode=143 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.931951 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f" exitCode=143 Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.931968 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"} Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.931991 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"} Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.932001 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"} Feb 04 11:39:38 crc kubenswrapper[4728]: I0204 11:39:38.932010 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"} Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.042446 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-4qfrz" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.068863 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/3.log" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.081031 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovn-acl-logging/0.log" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.082419 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovn-controller/0.log" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.083315 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147030 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-stt9j"] Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147216 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="northd" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147228 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="northd" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147238 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="kube-rbac-proxy-node" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147244 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="kube-rbac-proxy-node" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147250 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="kube-rbac-proxy-ovn-metrics" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147257 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="kube-rbac-proxy-ovn-metrics" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147265 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="sbdb" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147270 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="sbdb" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147280 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147286 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147294 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147299 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147309 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 
11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147315 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147322 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="nbdb" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147327 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="nbdb" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147335 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147340 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147349 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovn-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147355 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovn-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147363 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="kubecfg-setup" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147369 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="kubecfg-setup" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147375 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovn-acl-logging" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147380 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovn-acl-logging" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147461 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147469 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovn-acl-logging" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147478 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="northd" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147486 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="kube-rbac-proxy-ovn-metrics" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147493 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="sbdb" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147503 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147510 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" 
containerName="nbdb" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147519 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147528 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovn-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147537 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="kube-rbac-proxy-node" Feb 04 11:39:39 crc kubenswrapper[4728]: E0204 11:39:39.147670 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147680 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147822 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.147982 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" containerName="ovnkube-controller" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.149430 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.168937 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-systemd\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.168970 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.168987 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-openvswitch\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169030 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e963298-5c99-4db8-bdba-88187d4b0018-ovn-node-metrics-cert\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169070 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-ovn\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169085 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-ovn-kubernetes\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169097 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169111 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-systemd-units\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169128 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-netd\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169141 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-slash\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169171 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp28q\" (UniqueName: \"kubernetes.io/projected/0e963298-5c99-4db8-bdba-88187d4b0018-kube-api-access-tp28q\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169195 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-etc-openvswitch\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169217 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-env-overrides\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169234 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-script-lib\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169260 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-bin\") pod 
\"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169281 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-var-lib-openvswitch\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169295 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-config\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169313 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-kubelet\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169331 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-log-socket\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169349 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-node-log\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169371 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-netns\") pod \"0e963298-5c99-4db8-bdba-88187d4b0018\" (UID: \"0e963298-5c99-4db8-bdba-88187d4b0018\") " Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169538 4728 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.169787 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170260 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170295 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170316 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-log-socket" (OuterVolumeSpecName: "log-socket") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170321 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170318 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-node-log" (OuterVolumeSpecName: "node-log") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170360 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170388 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170368 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170404 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170412 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-slash" (OuterVolumeSpecName: "host-slash") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170433 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170459 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170485 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.170645 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.171152 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.174165 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e963298-5c99-4db8-bdba-88187d4b0018-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.175264 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e963298-5c99-4db8-bdba-88187d4b0018-kube-api-access-tp28q" (OuterVolumeSpecName: "kube-api-access-tp28q") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). 
InnerVolumeSpecName "kube-api-access-tp28q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.185578 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0e963298-5c99-4db8-bdba-88187d4b0018" (UID: "0e963298-5c99-4db8-bdba-88187d4b0018"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270470 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-cni-bin\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270520 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-run-ovn-kubernetes\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270552 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270581 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-var-lib-openvswitch\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270609 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-run-netns\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270633 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-run-openvswitch\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270655 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d98610cd-d83d-4bf1-8f95-7975a729b776-env-overrides\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270709 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-run-systemd\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270731 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d98610cd-d83d-4bf1-8f95-7975a729b776-ovnkube-script-lib\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270855 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d98610cd-d83d-4bf1-8f95-7975a729b776-ovnkube-config\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270907 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nd56\" (UniqueName: \"kubernetes.io/projected/d98610cd-d83d-4bf1-8f95-7975a729b776-kube-api-access-9nd56\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270957 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-run-ovn\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.270982 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-slash\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271013 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-log-socket\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271036 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-kubelet\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271059 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d98610cd-d83d-4bf1-8f95-7975a729b776-ovn-node-metrics-cert\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc 
kubenswrapper[4728]: I0204 11:39:39.271103 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-etc-openvswitch\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271139 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-systemd-units\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271162 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-node-log\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271181 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-cni-netd\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271272 4728 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e963298-5c99-4db8-bdba-88187d4b0018-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271342 4728 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271353 4728 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271362 4728 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271398 4728 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271406 4728 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-slash\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271416 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp28q\" (UniqueName: \"kubernetes.io/projected/0e963298-5c99-4db8-bdba-88187d4b0018-kube-api-access-tp28q\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271424 4728 
reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271435 4728 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271443 4728 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271452 4728 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271461 4728 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271468 4728 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e963298-5c99-4db8-bdba-88187d4b0018-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271477 4728 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271484 4728 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-log-socket\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271492 4728 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-node-log\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271500 4728 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271509 4728 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.271527 4728 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e963298-5c99-4db8-bdba-88187d4b0018-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.372889 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-etc-openvswitch\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc 
kubenswrapper[4728]: I0204 11:39:39.372952 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-systemd-units\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.372986 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-node-log\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373014 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-cni-netd\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373047 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-etc-openvswitch\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373060 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-cni-bin\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373096 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-cni-bin\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373117 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-systemd-units\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373163 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-run-ovn-kubernetes\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373127 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-run-ovn-kubernetes\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373190 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-node-log\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373215 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373316 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-var-lib-openvswitch\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373366 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-run-netns\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373135 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-cni-netd\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373436 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-run-openvswitch\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373404 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-run-openvswitch\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373472 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d98610cd-d83d-4bf1-8f95-7975a729b776-env-overrides\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373469 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-run-netns\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373471 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-var-lib-openvswitch\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373531 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-run-systemd\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373558 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d98610cd-d83d-4bf1-8f95-7975a729b776-ovnkube-script-lib\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373591 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-run-systemd\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373617 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d98610cd-d83d-4bf1-8f95-7975a729b776-ovnkube-config\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373248 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373670 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nd56\" (UniqueName: \"kubernetes.io/projected/d98610cd-d83d-4bf1-8f95-7975a729b776-kube-api-access-9nd56\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373947 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-slash\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.373974 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-slash\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.374339 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d98610cd-d83d-4bf1-8f95-7975a729b776-env-overrides\") 
pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.374421 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d98610cd-d83d-4bf1-8f95-7975a729b776-ovnkube-config\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.374477 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-run-ovn\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.374510 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-log-socket\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.374598 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-kubelet\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.374615 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d98610cd-d83d-4bf1-8f95-7975a729b776-ovn-node-metrics-cert\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.374663 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-host-kubelet\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.374549 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-run-ovn\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.374567 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d98610cd-d83d-4bf1-8f95-7975a729b776-log-socket\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.375088 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d98610cd-d83d-4bf1-8f95-7975a729b776-ovnkube-script-lib\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: 
I0204 11:39:39.378943 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d98610cd-d83d-4bf1-8f95-7975a729b776-ovn-node-metrics-cert\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.388258 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nd56\" (UniqueName: \"kubernetes.io/projected/d98610cd-d83d-4bf1-8f95-7975a729b776-kube-api-access-9nd56\") pod \"ovnkube-node-stt9j\" (UID: \"d98610cd-d83d-4bf1-8f95-7975a729b776\") " pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.461344 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.939334 4728 generic.go:334] "Generic (PLEG): container finished" podID="d98610cd-d83d-4bf1-8f95-7975a729b776" containerID="b8b96013b5f24a9fb6e79de5ddd2b8f4042b131a94d673858855d3b7893ebf14" exitCode=0 Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.939408 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" event={"ID":"d98610cd-d83d-4bf1-8f95-7975a729b776","Type":"ContainerDied","Data":"b8b96013b5f24a9fb6e79de5ddd2b8f4042b131a94d673858855d3b7893ebf14"} Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.939445 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" event={"ID":"d98610cd-d83d-4bf1-8f95-7975a729b776","Type":"ContainerStarted","Data":"feea21e09797702c3a05f2ba0abf21b0546135d31413d0c0843bdccad40f811c"} Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.941536 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dc6rd_3dbc56be-abfc-4180-870e-f4c19bd09f4b/kube-multus/2.log" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.943978 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovnkube-controller/3.log" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.950210 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovn-acl-logging/0.log" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.951086 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c6r5d_0e963298-5c99-4db8-bdba-88187d4b0018/ovn-controller/0.log" Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.952964 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828" exitCode=0 Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.952989 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931" exitCode=0 Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.952999 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb" exitCode=0 Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 
11:39:39.953008 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e963298-5c99-4db8-bdba-88187d4b0018" containerID="66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d" exitCode=0
Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.953054 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"}
Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.953083 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"}
Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.953095 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"}
Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.953110 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"}
Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.953123 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d" event={"ID":"0e963298-5c99-4db8-bdba-88187d4b0018","Type":"ContainerDied","Data":"1ce83136c79a115e349d962f70ab2bb323d5948a29c94ddcd9ec8910ba9aa545"}
Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.953143 4728 scope.go:117] "RemoveContainer" containerID="b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"
Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.953143 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c6r5d"
Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.982207 4728 scope.go:117] "RemoveContainer" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"
Feb 04 11:39:39 crc kubenswrapper[4728]: I0204 11:39:39.998776 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c6r5d"]
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.002256 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c6r5d"]
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.030363 4728 scope.go:117] "RemoveContainer" containerID="15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.049417 4728 scope.go:117] "RemoveContainer" containerID="ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.063712 4728 scope.go:117] "RemoveContainer" containerID="66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.078674 4728 scope.go:117] "RemoveContainer" containerID="9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.094059 4728 scope.go:117] "RemoveContainer" containerID="02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.107807 4728 scope.go:117] "RemoveContainer" containerID="c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.127074 4728 scope.go:117] "RemoveContainer" containerID="51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.170391 4728 scope.go:117] "RemoveContainer" containerID="d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.185703 4728 scope.go:117] "RemoveContainer" containerID="b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"
Feb 04 11:39:40 crc kubenswrapper[4728]: E0204 11:39:40.186301 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828\": container with ID starting with b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828 not found: ID does not exist" containerID="b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.186354 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"} err="failed to get container status \"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828\": rpc error: code = NotFound desc = could not find container \"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828\": container with ID starting with b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.186384 4728 scope.go:117] "RemoveContainer" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"
Feb 04 11:39:40 crc kubenswrapper[4728]: E0204 11:39:40.186680 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\": container with ID starting with 4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3 not found: ID does not exist" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.186709 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"} err="failed to get container status \"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\": rpc error: code = NotFound desc = could not find container \"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\": container with ID starting with 4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.186731 4728 scope.go:117] "RemoveContainer" containerID="15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"
Feb 04 11:39:40 crc kubenswrapper[4728]: E0204 11:39:40.187025 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\": container with ID starting with 15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931 not found: ID does not exist" containerID="15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.187056 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"} err="failed to get container status \"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\": rpc error: code = NotFound desc = could not find container \"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\": container with ID starting with 15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.187078 4728 scope.go:117] "RemoveContainer" containerID="ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"
Feb 04 11:39:40 crc kubenswrapper[4728]: E0204 11:39:40.187350 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\": container with ID starting with ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb not found: ID does not exist" containerID="ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.187370 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"} err="failed to get container status \"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\": rpc error: code = NotFound desc = could not find container \"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\": container with ID starting with ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.187399 4728 scope.go:117] "RemoveContainer" containerID="66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"
Feb 04 11:39:40 crc kubenswrapper[4728]: E0204 11:39:40.187686 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\": container with ID starting with 66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d not found: ID does not exist" containerID="66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.187734 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"} err="failed to get container status \"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\": rpc error: code = NotFound desc = could not find container \"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\": container with ID starting with 66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.187819 4728 scope.go:117] "RemoveContainer" containerID="9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"
Feb 04 11:39:40 crc kubenswrapper[4728]: E0204 11:39:40.188101 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\": container with ID starting with 9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d not found: ID does not exist" containerID="9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.188142 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"} err="failed to get container status \"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\": rpc error: code = NotFound desc = could not find container \"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\": container with ID starting with 9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.188156 4728 scope.go:117] "RemoveContainer" containerID="02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"
Feb 04 11:39:40 crc kubenswrapper[4728]: E0204 11:39:40.188401 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\": container with ID starting with 02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b not found: ID does not exist" containerID="02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.188464 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"} err="failed to get container status \"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\": rpc error: code = NotFound desc = could not find container \"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\": container with ID starting with 02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.188481 4728 scope.go:117] "RemoveContainer" containerID="c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"
Feb 04 11:39:40 crc kubenswrapper[4728]: E0204 11:39:40.188709 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\": container with ID starting with c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022 not found: ID does not exist" containerID="c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.188747 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"} err="failed to get container status \"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\": rpc error: code = NotFound desc = could not find container \"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\": container with ID starting with c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.188775 4728 scope.go:117] "RemoveContainer" containerID="51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"
Feb 04 11:39:40 crc kubenswrapper[4728]: E0204 11:39:40.189079 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\": container with ID starting with 51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f not found: ID does not exist" containerID="51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.189102 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"} err="failed to get container status \"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\": rpc error: code = NotFound desc = could not find container \"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\": container with ID starting with 51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.189119 4728 scope.go:117] "RemoveContainer" containerID="d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"
Feb 04 11:39:40 crc kubenswrapper[4728]: E0204 11:39:40.189464 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\": container with ID starting with d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f not found: ID does not exist" containerID="d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.189504 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"} err="failed to get container status \"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\": rpc error: code = NotFound desc = could not find container \"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\": container with ID starting with d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.189517 4728 scope.go:117] "RemoveContainer" containerID="b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.189770 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"} err="failed to get container status \"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828\": rpc error: code = NotFound desc = could not find container \"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828\": container with ID starting with b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.189790 4728 scope.go:117] "RemoveContainer" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.190086 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"} err="failed to get container status \"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\": rpc error: code = NotFound desc = could not find container \"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\": container with ID starting with 4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.190121 4728 scope.go:117] "RemoveContainer" containerID="15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.190426 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"} err="failed to get container status \"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\": rpc error: code = NotFound desc = could not find container \"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\": container with ID starting with 15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.190452 4728 scope.go:117] "RemoveContainer" containerID="ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.190702 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"} err="failed to get container status \"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\": rpc error: code = NotFound desc = could not find container \"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\": container with ID starting with ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.190734 4728 scope.go:117] "RemoveContainer" containerID="66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.191011 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"} err="failed to get container status \"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\": rpc error: code = NotFound desc = could not find container \"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\": container with ID starting with 66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.191076 4728 scope.go:117] "RemoveContainer" containerID="9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.191291 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"} err="failed to get container status \"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\": rpc error: code = NotFound desc = could not find container \"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\": container with ID starting with 9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.191313 4728 scope.go:117] "RemoveContainer" containerID="02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.191625 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"} err="failed to get container status \"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\": rpc error: code = NotFound desc = could not find container \"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\": container with ID starting with 02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.191661 4728 scope.go:117] "RemoveContainer" containerID="c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.192005 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"} err="failed to get container status \"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\": rpc error: code = NotFound desc = could not find container \"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\": container with ID starting with c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.192028 4728 scope.go:117] "RemoveContainer" containerID="51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.192298 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"} err="failed to get container status \"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\": rpc error: code = NotFound desc = could not find container \"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\": container with ID starting with 51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.192336 4728 scope.go:117] "RemoveContainer" containerID="d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.192559 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"} err="failed to get container status \"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\": rpc error: code = NotFound desc = could not find container \"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\": container with ID starting with d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.192578 4728 scope.go:117] "RemoveContainer" containerID="b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.193048 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"} err="failed to get container status \"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828\": rpc error: code = NotFound desc = could not find container \"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828\": container with ID starting with b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.193067 4728 scope.go:117] "RemoveContainer" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.193334 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"} err="failed to get container status \"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\": rpc error: code = NotFound desc = could not find container \"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\": container with ID starting with 4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.193352 4728 scope.go:117] "RemoveContainer" containerID="15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.193599 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"} err="failed to get container status \"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\": rpc error: code = NotFound desc = could not find container \"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\": container with ID starting with 15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.193619 4728 scope.go:117] "RemoveContainer" containerID="ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.194165 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"} err="failed to get container status \"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\": rpc error: code = NotFound desc = could not find container \"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\": container with ID starting with ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.194190 4728 scope.go:117] "RemoveContainer" containerID="66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.194518 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"} err="failed to get container status \"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\": rpc error: code = NotFound desc = could not find container \"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\": container with ID starting with 66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.194605 4728 scope.go:117] "RemoveContainer" containerID="9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.194987 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"} err="failed to get container status \"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\": rpc error: code = NotFound desc = could not find container \"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\": container with ID starting with 9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.195016 4728 scope.go:117] "RemoveContainer" containerID="02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.195338 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"} err="failed to get container status \"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\": rpc error: code = NotFound desc = could not find container \"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\": container with ID starting with 02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.195358 4728 scope.go:117] "RemoveContainer" containerID="c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.195605 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"} err="failed to get container status \"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\": rpc error: code = NotFound desc = could not find container \"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\": container with ID starting with c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.195626 4728 scope.go:117] "RemoveContainer" containerID="51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.195995 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"} err="failed to get container status \"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\": rpc error: code = NotFound desc = could not find container \"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\": container with ID starting with 51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.196019 4728 scope.go:117] "RemoveContainer" containerID="d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.196379 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"} err="failed to get container status \"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\": rpc error: code = NotFound desc = could not find container \"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\": container with ID starting with d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.196401 4728 scope.go:117] "RemoveContainer" containerID="b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.196631 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828"} err="failed to get container status \"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828\": rpc error: code = NotFound desc = could not find container \"b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828\": container with ID starting with b892b1609722d534cb11474eaab078d444c0b602f9034feeb89b4b304e149828 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.196652 4728 scope.go:117] "RemoveContainer" containerID="4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.196888 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3"} err="failed to get container status \"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\": rpc error: code = NotFound desc = could not find container \"4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3\": container with ID starting with 4899c35edf792b6e3d284613bc3af3d6ebcd2f68aa00bf8cd8c881a4f2e8b2f3 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.196907 4728 scope.go:117] "RemoveContainer" containerID="15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.197278 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931"} err="failed to get container status \"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\": rpc error: code = NotFound desc = could not find container \"15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931\": container with ID starting with 15d54b7ac89ec751dcd17cceed9d408d22798c4fed06448c823b1696f94c9931 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.197331 4728 scope.go:117] "RemoveContainer" containerID="ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.197642 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb"} err="failed to get container status \"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\": rpc error: code = NotFound desc = could not find container \"ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb\": container with ID starting with ea0c494d978667f049c58773d9c9441e8d9647da21ef6d25dac2a89b898156bb not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.197677 4728 scope.go:117] "RemoveContainer" containerID="66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.197906 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d"} err="failed to get container status \"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\": rpc error: code = NotFound desc = could not find container \"66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d\": container with ID starting with 66398f3b8d66f6ef45324a06bfb16f794e70d05dcf6b0c66af5d03dbc3dc2e8d not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.197929 4728 scope.go:117] "RemoveContainer" containerID="9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.198170 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d"} err="failed to get container status \"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\": rpc error: code = NotFound desc = could not find container \"9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d\": container with ID starting with 9897c22ae5cf38be007fe308450367c5118a79a00697ffb1b1f542984afa5d7d not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.198192 4728 scope.go:117] "RemoveContainer" containerID="02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.198351 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b"} err="failed to get container status \"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\": rpc error: code = NotFound desc = could not find container \"02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b\": container with ID starting with 02b5c35b58cd7d49d3b8532d0c8a042505621a638dee85a30d00d78e7c83f18b not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.198372 4728 scope.go:117] "RemoveContainer" containerID="c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.198557 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022"} err="failed to get container status \"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\": rpc error: code = NotFound desc = could not find container \"c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022\": container with ID starting with c5af0f57cc995d15f74ce0968ecaeb737aa934bed9ffa5904ba2ecd7c2c59022 not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.198609 4728 scope.go:117] "RemoveContainer" containerID="51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.198861 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f"} err="failed to get container status \"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\": rpc error: code = NotFound desc = could not find container \"51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f\": container with ID starting with 51a0aaf38ace79d14e36775c8a8048aa3b1884db26a841a6e0d6ffdef829b58f not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.198889 4728 scope.go:117] "RemoveContainer" containerID="d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.199120 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f"} err="failed to get container status \"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\": rpc error: code = NotFound desc = could not find container \"d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f\": container with ID starting with d77113b6b7108a995f781f2e49286feffccfeb5ea373421cf426c37762e8625f not found: ID does not exist"
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.973303 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" event={"ID":"d98610cd-d83d-4bf1-8f95-7975a729b776","Type":"ContainerStarted","Data":"24a852aef39a1c3453d71d3daa510fcec83f96d57d03f86f60906bfd2b1560a4"}
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.973356 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" event={"ID":"d98610cd-d83d-4bf1-8f95-7975a729b776","Type":"ContainerStarted","Data":"a6c89880cbf5985b53bc7d7545dc9ca4bab0f29a68e9810e8ad2e28b4382dfe1"}
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.973374 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" event={"ID":"d98610cd-d83d-4bf1-8f95-7975a729b776","Type":"ContainerStarted","Data":"30c7fd3bf09979af37407fd54dbcc4021a675db59cd54c202200629217e39a3c"}
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.973388 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" event={"ID":"d98610cd-d83d-4bf1-8f95-7975a729b776","Type":"ContainerStarted","Data":"5bf3f7b44181971e781af81fe9fa26e801514c1f1463872ead0996d18180691c"}
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.973400 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" event={"ID":"d98610cd-d83d-4bf1-8f95-7975a729b776","Type":"ContainerStarted","Data":"56f45e5666b2383a97e4da0aa5c5ff69fe948821f83a18359f4b54fd609c71d9"}
Feb 04 11:39:40 crc kubenswrapper[4728]: I0204 11:39:40.973412 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" event={"ID":"d98610cd-d83d-4bf1-8f95-7975a729b776","Type":"ContainerStarted","Data":"303c7d8b3e706721f30df9cb5aee155bc03a4c12c5716dfc0b57e0f086d202c8"}
Feb 04 11:39:41 crc kubenswrapper[4728]: I0204 11:39:41.574265 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e963298-5c99-4db8-bdba-88187d4b0018" path="/var/lib/kubelet/pods/0e963298-5c99-4db8-bdba-88187d4b0018/volumes"
Feb 04 11:39:42 crc kubenswrapper[4728]: I0204 11:39:42.989538 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" event={"ID":"d98610cd-d83d-4bf1-8f95-7975a729b776","Type":"ContainerStarted","Data":"bdaa640d9d5c18d9cbeb2a523f8c94c7eb24f7467debdf4fd2a868d05935bff2"}
Feb 04 11:39:46 crc kubenswrapper[4728]: I0204 11:39:46.009775 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" event={"ID":"d98610cd-d83d-4bf1-8f95-7975a729b776","Type":"ContainerStarted","Data":"4afdd43678c0d754ef561422400223b298203af5e5522967ac32b1ef5b3c585b"}
Feb 04 11:39:46 crc kubenswrapper[4728]: I0204 11:39:46.010199 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j"
Feb 04 11:39:46 crc kubenswrapper[4728]: I0204 11:39:46.010215 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j"
Feb 04 11:39:46 crc kubenswrapper[4728]: I0204 11:39:46.010224 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j"
Feb 04 11:39:46 crc kubenswrapper[4728]: I0204 11:39:46.034709 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j"
Feb 04 11:39:46 crc kubenswrapper[4728]: I0204 11:39:46.037137 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j"
Feb 04 11:39:46 crc kubenswrapper[4728]: I0204 11:39:46.046692 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j" podStartSLOduration=7.046678736 podStartE2EDuration="7.046678736s" podCreationTimestamp="2026-02-04 11:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:39:46.043645301 +0000 UTC m=+735.186349686" watchObservedRunningTime="2026-02-04 11:39:46.046678736 +0000 UTC m=+735.189383121"
Feb 04 11:39:52 crc kubenswrapper[4728]: I0204 11:39:52.553599 4728 scope.go:117] "RemoveContainer" containerID="c5303ece67988b48c9f7078f4f5f783e2dfa7759b80454d2a50b80d956debf57"
Feb 04 11:39:53 crc kubenswrapper[4728]: I0204 11:39:53.051963 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dc6rd_3dbc56be-abfc-4180-870e-f4c19bd09f4b/kube-multus/2.log"
Feb 04 11:39:53 crc kubenswrapper[4728]: I0204 11:39:53.052287 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dc6rd" event={"ID":"3dbc56be-abfc-4180-870e-f4c19bd09f4b","Type":"ContainerStarted","Data":"c57d240b2b8a418909a97d3cb7248300e1ed1f5b498896e118a3a7dfcd7e726c"}
Feb 04 11:40:05 crc kubenswrapper[4728]: I0204 11:40:05.448863 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:40:05 crc kubenswrapper[4728]: I0204 11:40:05.449552 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:40:09 crc kubenswrapper[4728]: I0204 11:40:09.487119 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-stt9j"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.314379 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"]
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.315933 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.318085 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.327429 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"]
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.393925 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-util\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.394106 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-bundle\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.394384 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5qt5\" (UniqueName: \"kubernetes.io/projected/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-kube-api-access-q5qt5\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.495289 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-util\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.495347 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-bundle\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.495402 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5qt5\" (UniqueName: \"kubernetes.io/projected/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-kube-api-access-q5qt5\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.496204 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-util\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.496216 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-bundle\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.517568 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5qt5\" (UniqueName: \"kubernetes.io/projected/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-kube-api-access-q5qt5\") pod \"36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") " pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.632139 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:12 crc kubenswrapper[4728]: I0204 11:40:12.782534 4728 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 04 11:40:13 crc kubenswrapper[4728]: I0204 11:40:13.035376 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"]
Feb 04 11:40:13 crc kubenswrapper[4728]: I0204 11:40:13.175922 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn" event={"ID":"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa","Type":"ContainerStarted","Data":"e1cb1cf1cdbeb1f632cd8963ac7d87829adf2976cb9be913f08b94d584e945c7"}
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.180983 4728 generic.go:334] "Generic (PLEG): container finished" podID="3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" containerID="70ba500893fb4109eff7900ffa69af2fe67ba536d5cdb3c125e4b6f00d64b553" exitCode=0
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.181028 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn" event={"ID":"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa","Type":"ContainerDied","Data":"70ba500893fb4109eff7900ffa69af2fe67ba536d5cdb3c125e4b6f00d64b553"}
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.664362 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8c8c5"]
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.666095 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.674808 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8c8c5"]
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.721659 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-utilities\") pod \"redhat-operators-8c8c5\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.721795 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnld8\" (UniqueName: \"kubernetes.io/projected/8d9fcf0f-5658-4803-8c51-856c7ee9673b-kube-api-access-nnld8\") pod \"redhat-operators-8c8c5\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.721856 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-catalog-content\") pod \"redhat-operators-8c8c5\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.823382 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-utilities\") pod \"redhat-operators-8c8c5\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.823457 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnld8\" (UniqueName: \"kubernetes.io/projected/8d9fcf0f-5658-4803-8c51-856c7ee9673b-kube-api-access-nnld8\") pod \"redhat-operators-8c8c5\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.823493 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-catalog-content\") pod \"redhat-operators-8c8c5\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.824277 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-catalog-content\") pod \"redhat-operators-8c8c5\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.824294 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-utilities\") pod \"redhat-operators-8c8c5\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.843349 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnld8\" (UniqueName: \"kubernetes.io/projected/8d9fcf0f-5658-4803-8c51-856c7ee9673b-kube-api-access-nnld8\") pod \"redhat-operators-8c8c5\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:14 crc kubenswrapper[4728]: I0204 11:40:14.986158 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:15 crc kubenswrapper[4728]: I0204 11:40:15.262484 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8c8c5"]
Feb 04 11:40:15 crc kubenswrapper[4728]: W0204 11:40:15.266282 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d9fcf0f_5658_4803_8c51_856c7ee9673b.slice/crio-1e41b820d4f66f6a1c77f17f4b9e5b76bca12953e9481164d1b6dfd9243e8a10 WatchSource:0}: Error finding container 1e41b820d4f66f6a1c77f17f4b9e5b76bca12953e9481164d1b6dfd9243e8a10: Status 404 returned error can't find the container with id 1e41b820d4f66f6a1c77f17f4b9e5b76bca12953e9481164d1b6dfd9243e8a10
Feb 04 11:40:16 crc kubenswrapper[4728]: I0204 11:40:16.199428 4728 generic.go:334] "Generic (PLEG): container finished" podID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" containerID="a07df10b2a280c299ee96c00d17faf9235288fb42c3743dd8c9d3d94af991585" exitCode=0
Feb 04 11:40:16 crc kubenswrapper[4728]: I0204 11:40:16.199557 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c8c5" event={"ID":"8d9fcf0f-5658-4803-8c51-856c7ee9673b","Type":"ContainerDied","Data":"a07df10b2a280c299ee96c00d17faf9235288fb42c3743dd8c9d3d94af991585"}
Feb 04 11:40:16 crc kubenswrapper[4728]: I0204 11:40:16.199708 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c8c5" event={"ID":"8d9fcf0f-5658-4803-8c51-856c7ee9673b","Type":"ContainerStarted","Data":"1e41b820d4f66f6a1c77f17f4b9e5b76bca12953e9481164d1b6dfd9243e8a10"}
Feb 04 11:40:16 crc kubenswrapper[4728]: I0204 11:40:16.203567 4728 generic.go:334] "Generic (PLEG): container finished" podID="3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" containerID="7158424f380208c6bc25f0db7b39ead37c056929b9c8d8e50c80c04b10d215d8" exitCode=0
Feb 04 11:40:16 crc kubenswrapper[4728]: I0204 11:40:16.203612 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn" event={"ID":"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa","Type":"ContainerDied","Data":"7158424f380208c6bc25f0db7b39ead37c056929b9c8d8e50c80c04b10d215d8"}
Feb 04 11:40:17 crc kubenswrapper[4728]: I0204 11:40:17.216846 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c8c5" event={"ID":"8d9fcf0f-5658-4803-8c51-856c7ee9673b","Type":"ContainerStarted","Data":"7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e"}
Feb 04 11:40:17 crc kubenswrapper[4728]: I0204 11:40:17.219943 4728 generic.go:334] "Generic (PLEG): container finished" podID="3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" containerID="ef519903c4829c43bef462747e93558cb73f93a8c3cc53a9bfd4e64351c32617" exitCode=0
Feb 04 11:40:17 crc kubenswrapper[4728]: I0204 11:40:17.219996 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn" event={"ID":"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa","Type":"ContainerDied","Data":"ef519903c4829c43bef462747e93558cb73f93a8c3cc53a9bfd4e64351c32617"}
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.231320 4728 generic.go:334] "Generic (PLEG): container finished" podID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" containerID="7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e" exitCode=0
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.231445 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c8c5" event={"ID":"8d9fcf0f-5658-4803-8c51-856c7ee9673b","Type":"ContainerDied","Data":"7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e"}
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.529462 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.570843 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5qt5\" (UniqueName: \"kubernetes.io/projected/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-kube-api-access-q5qt5\") pod \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") "
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.571178 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-util\") pod \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") "
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.571204 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-bundle\") pod \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\" (UID: \"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa\") "
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.571837 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-bundle" (OuterVolumeSpecName: "bundle") pod "3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" (UID: "3f46d5c6-6a3b-45e4-8be8-c23e04c24afa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.579645 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-kube-api-access-q5qt5" (OuterVolumeSpecName: "kube-api-access-q5qt5") pod "3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" (UID: "3f46d5c6-6a3b-45e4-8be8-c23e04c24afa"). InnerVolumeSpecName "kube-api-access-q5qt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.672615 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5qt5\" (UniqueName: \"kubernetes.io/projected/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-kube-api-access-q5qt5\") on node \"crc\" DevicePath \"\""
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.672670 4728 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.736403 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-util" (OuterVolumeSpecName: "util") pod "3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" (UID: "3f46d5c6-6a3b-45e4-8be8-c23e04c24afa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:40:18 crc kubenswrapper[4728]: I0204 11:40:18.773721 4728 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f46d5c6-6a3b-45e4-8be8-c23e04c24afa-util\") on node \"crc\" DevicePath \"\""
Feb 04 11:40:19 crc kubenswrapper[4728]: I0204 11:40:19.237776 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn"
Feb 04 11:40:19 crc kubenswrapper[4728]: I0204 11:40:19.237746 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn" event={"ID":"3f46d5c6-6a3b-45e4-8be8-c23e04c24afa","Type":"ContainerDied","Data":"e1cb1cf1cdbeb1f632cd8963ac7d87829adf2976cb9be913f08b94d584e945c7"}
Feb 04 11:40:19 crc kubenswrapper[4728]: I0204 11:40:19.237886 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1cb1cf1cdbeb1f632cd8963ac7d87829adf2976cb9be913f08b94d584e945c7"
Feb 04 11:40:19 crc kubenswrapper[4728]: I0204 11:40:19.242663 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c8c5" event={"ID":"8d9fcf0f-5658-4803-8c51-856c7ee9673b","Type":"ContainerStarted","Data":"f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0"}
Feb 04 11:40:19 crc kubenswrapper[4728]: I0204 11:40:19.262565 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8c8c5" podStartSLOduration=2.7024080870000002 podStartE2EDuration="5.262546735s" podCreationTimestamp="2026-02-04 11:40:14 +0000 UTC" firstStartedPulling="2026-02-04 11:40:16.20138513 +0000 UTC m=+765.344089525" lastFinishedPulling="2026-02-04 11:40:18.761523788 +0000 UTC m=+767.904228173" observedRunningTime="2026-02-04 11:40:19.259091259 +0000 UTC m=+768.401795654" watchObservedRunningTime="2026-02-04 11:40:19.262546735 +0000 UTC m=+768.405251110"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.599476 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-57bf49857b-27rqq"]
Feb 04 11:40:22 crc kubenswrapper[4728]: E0204 11:40:22.600214 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" containerName="pull"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.600230 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" containerName="pull"
Feb 04 11:40:22 crc kubenswrapper[4728]: E0204 11:40:22.600246 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" containerName="util"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.600262 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" containerName="util"
Feb 04 11:40:22 crc kubenswrapper[4728]: E0204 11:40:22.600277 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" containerName="extract"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.600286 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" containerName="extract"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.600534 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f46d5c6-6a3b-45e4-8be8-c23e04c24afa" containerName="extract"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.603890 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-57bf49857b-27rqq"]
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.604051 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-57bf49857b-27rqq"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.607657 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.607716 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-wkncl"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.609379 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.626980 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jn5k\" (UniqueName: \"kubernetes.io/projected/67a8e926-007f-41c2-aace-07706b07e072-kube-api-access-7jn5k\") pod \"nmstate-operator-57bf49857b-27rqq\" (UID: \"67a8e926-007f-41c2-aace-07706b07e072\") " pod="openshift-nmstate/nmstate-operator-57bf49857b-27rqq"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.728285 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jn5k\" (UniqueName: \"kubernetes.io/projected/67a8e926-007f-41c2-aace-07706b07e072-kube-api-access-7jn5k\") pod \"nmstate-operator-57bf49857b-27rqq\" (UID: \"67a8e926-007f-41c2-aace-07706b07e072\") " pod="openshift-nmstate/nmstate-operator-57bf49857b-27rqq"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.745333 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jn5k\" (UniqueName: \"kubernetes.io/projected/67a8e926-007f-41c2-aace-07706b07e072-kube-api-access-7jn5k\") pod \"nmstate-operator-57bf49857b-27rqq\" (UID: \"67a8e926-007f-41c2-aace-07706b07e072\") " pod="openshift-nmstate/nmstate-operator-57bf49857b-27rqq"
Feb 04 11:40:22 crc kubenswrapper[4728]: I0204 11:40:22.931008 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-57bf49857b-27rqq"
Feb 04 11:40:23 crc kubenswrapper[4728]: I0204 11:40:23.158526 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-57bf49857b-27rqq"]
Feb 04 11:40:23 crc kubenswrapper[4728]: W0204 11:40:23.167900 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67a8e926_007f_41c2_aace_07706b07e072.slice/crio-01a9e221f7e5adb6ca7663de5c4908ae80689396a86fcc91e257c0cb7788f2df WatchSource:0}: Error finding container 01a9e221f7e5adb6ca7663de5c4908ae80689396a86fcc91e257c0cb7788f2df: Status 404 returned error can't find the container with id 01a9e221f7e5adb6ca7663de5c4908ae80689396a86fcc91e257c0cb7788f2df
Feb 04 11:40:23 crc kubenswrapper[4728]: I0204 11:40:23.268982 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-57bf49857b-27rqq" event={"ID":"67a8e926-007f-41c2-aace-07706b07e072","Type":"ContainerStarted","Data":"01a9e221f7e5adb6ca7663de5c4908ae80689396a86fcc91e257c0cb7788f2df"}
Feb 04 11:40:24 crc kubenswrapper[4728]: I0204 11:40:24.987133 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:24 crc kubenswrapper[4728]: I0204 11:40:24.987203 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:25 crc kubenswrapper[4728]: I0204 11:40:25.028286 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:25 crc kubenswrapper[4728]: I0204 11:40:25.324171 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8c8c5"
Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.413900 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vcm46"]
Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.415629 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.444615 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcm46"] Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.484534 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqtqv\" (UniqueName: \"kubernetes.io/projected/f289245d-75bb-4d65-ada8-8657551179f4-kube-api-access-qqtqv\") pod \"certified-operators-vcm46\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.484618 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-utilities\") pod \"certified-operators-vcm46\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.484649 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-catalog-content\") pod \"certified-operators-vcm46\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.585345 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqtqv\" (UniqueName: \"kubernetes.io/projected/f289245d-75bb-4d65-ada8-8657551179f4-kube-api-access-qqtqv\") pod \"certified-operators-vcm46\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.585405 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-utilities\") pod \"certified-operators-vcm46\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.585428 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-catalog-content\") pod \"certified-operators-vcm46\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.585898 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-catalog-content\") pod \"certified-operators-vcm46\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.585959 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-utilities\") pod \"certified-operators-vcm46\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.611691 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qqtqv\" (UniqueName: \"kubernetes.io/projected/f289245d-75bb-4d65-ada8-8657551179f4-kube-api-access-qqtqv\") pod \"certified-operators-vcm46\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:26 crc kubenswrapper[4728]: I0204 11:40:26.760843 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:27 crc kubenswrapper[4728]: I0204 11:40:27.005432 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcm46"] Feb 04 11:40:27 crc kubenswrapper[4728]: W0204 11:40:27.027155 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf289245d_75bb_4d65_ada8_8657551179f4.slice/crio-9cb93e56c98da4cdcc45a8ad1eb2ed1cab218d93f05ddca7f979fcad6dfc7c2f WatchSource:0}: Error finding container 9cb93e56c98da4cdcc45a8ad1eb2ed1cab218d93f05ddca7f979fcad6dfc7c2f: Status 404 returned error can't find the container with id 9cb93e56c98da4cdcc45a8ad1eb2ed1cab218d93f05ddca7f979fcad6dfc7c2f Feb 04 11:40:27 crc kubenswrapper[4728]: I0204 11:40:27.421697 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcm46" event={"ID":"f289245d-75bb-4d65-ada8-8657551179f4","Type":"ContainerStarted","Data":"83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9"} Feb 04 11:40:27 crc kubenswrapper[4728]: I0204 11:40:27.422122 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcm46" event={"ID":"f289245d-75bb-4d65-ada8-8657551179f4","Type":"ContainerStarted","Data":"9cb93e56c98da4cdcc45a8ad1eb2ed1cab218d93f05ddca7f979fcad6dfc7c2f"} Feb 04 11:40:27 crc kubenswrapper[4728]: I0204 11:40:27.423294 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-57bf49857b-27rqq" event={"ID":"67a8e926-007f-41c2-aace-07706b07e072","Type":"ContainerStarted","Data":"b0938a0f97223fbcb1b433d862eeb8b080f6dbf393646b472de8aa58bd0c34d1"} Feb 04 11:40:27 crc kubenswrapper[4728]: I0204 11:40:27.460883 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-57bf49857b-27rqq" podStartSLOduration=3.15754012 podStartE2EDuration="5.46086852s" podCreationTimestamp="2026-02-04 11:40:22 +0000 UTC" firstStartedPulling="2026-02-04 11:40:23.173016063 +0000 UTC m=+772.315720448" lastFinishedPulling="2026-02-04 11:40:25.476344463 +0000 UTC m=+774.619048848" observedRunningTime="2026-02-04 11:40:27.458853039 +0000 UTC m=+776.601557424" watchObservedRunningTime="2026-02-04 11:40:27.46086852 +0000 UTC m=+776.603572905" Feb 04 11:40:28 crc kubenswrapper[4728]: I0204 11:40:28.429306 4728 generic.go:334] "Generic (PLEG): container finished" podID="f289245d-75bb-4d65-ada8-8657551179f4" containerID="83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9" exitCode=0 Feb 04 11:40:28 crc kubenswrapper[4728]: I0204 11:40:28.429371 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcm46" event={"ID":"f289245d-75bb-4d65-ada8-8657551179f4","Type":"ContainerDied","Data":"83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9"} Feb 04 11:40:28 crc kubenswrapper[4728]: I0204 11:40:28.648066 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-8c8c5"] Feb 04 11:40:28 crc kubenswrapper[4728]: I0204 11:40:28.648593 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8c8c5" podUID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" containerName="registry-server" containerID="cri-o://f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0" gracePeriod=2 Feb 04 11:40:28 crc kubenswrapper[4728]: I0204 11:40:28.962224 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8c8c5" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.017604 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnld8\" (UniqueName: \"kubernetes.io/projected/8d9fcf0f-5658-4803-8c51-856c7ee9673b-kube-api-access-nnld8\") pod \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.017772 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-utilities\") pod \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.017803 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-catalog-content\") pod \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\" (UID: \"8d9fcf0f-5658-4803-8c51-856c7ee9673b\") " Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.018553 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-utilities" (OuterVolumeSpecName: "utilities") pod "8d9fcf0f-5658-4803-8c51-856c7ee9673b" (UID: "8d9fcf0f-5658-4803-8c51-856c7ee9673b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.025561 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9fcf0f-5658-4803-8c51-856c7ee9673b-kube-api-access-nnld8" (OuterVolumeSpecName: "kube-api-access-nnld8") pod "8d9fcf0f-5658-4803-8c51-856c7ee9673b" (UID: "8d9fcf0f-5658-4803-8c51-856c7ee9673b"). InnerVolumeSpecName "kube-api-access-nnld8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.119080 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnld8\" (UniqueName: \"kubernetes.io/projected/8d9fcf0f-5658-4803-8c51-856c7ee9673b-kube-api-access-nnld8\") on node \"crc\" DevicePath \"\"" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.119117 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.139867 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d9fcf0f-5658-4803-8c51-856c7ee9673b" (UID: "8d9fcf0f-5658-4803-8c51-856c7ee9673b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.219840 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d9fcf0f-5658-4803-8c51-856c7ee9673b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.443577 4728 generic.go:334] "Generic (PLEG): container finished" podID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" containerID="f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0" exitCode=0 Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.443715 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c8c5" event={"ID":"8d9fcf0f-5658-4803-8c51-856c7ee9673b","Type":"ContainerDied","Data":"f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0"} Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.443936 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c8c5" event={"ID":"8d9fcf0f-5658-4803-8c51-856c7ee9673b","Type":"ContainerDied","Data":"1e41b820d4f66f6a1c77f17f4b9e5b76bca12953e9481164d1b6dfd9243e8a10"} Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.443736 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8c8c5" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.443956 4728 scope.go:117] "RemoveContainer" containerID="f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.463348 4728 scope.go:117] "RemoveContainer" containerID="7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.487814 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8c8c5"] Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.493450 4728 scope.go:117] "RemoveContainer" containerID="a07df10b2a280c299ee96c00d17faf9235288fb42c3743dd8c9d3d94af991585" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.495113 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8c8c5"] Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.512552 4728 scope.go:117] "RemoveContainer" containerID="f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0" Feb 04 11:40:29 crc kubenswrapper[4728]: E0204 11:40:29.513062 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0\": container with ID starting with f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0 not found: ID does not exist" containerID="f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.513094 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0"} err="failed to get container status \"f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0\": rpc error: code = NotFound desc = could not find container \"f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0\": container with ID starting with f05dd3d93b8d8d4e5ff0c88b19993bf2237892bba70bd5e4d9a1f823c577a8d0 not found: ID does not exist" Feb 04 11:40:29 crc 
kubenswrapper[4728]: I0204 11:40:29.513117 4728 scope.go:117] "RemoveContainer" containerID="7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e" Feb 04 11:40:29 crc kubenswrapper[4728]: E0204 11:40:29.513437 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e\": container with ID starting with 7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e not found: ID does not exist" containerID="7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.513476 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e"} err="failed to get container status \"7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e\": rpc error: code = NotFound desc = could not find container \"7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e\": container with ID starting with 7757fe6aff9a711115dd825ae6666fe666490e8cfe5eba9ea9b8a1b099473e9e not found: ID does not exist" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.513500 4728 scope.go:117] "RemoveContainer" containerID="a07df10b2a280c299ee96c00d17faf9235288fb42c3743dd8c9d3d94af991585" Feb 04 11:40:29 crc kubenswrapper[4728]: E0204 11:40:29.513859 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07df10b2a280c299ee96c00d17faf9235288fb42c3743dd8c9d3d94af991585\": container with ID starting with a07df10b2a280c299ee96c00d17faf9235288fb42c3743dd8c9d3d94af991585 not found: ID does not exist" containerID="a07df10b2a280c299ee96c00d17faf9235288fb42c3743dd8c9d3d94af991585" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.513888 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07df10b2a280c299ee96c00d17faf9235288fb42c3743dd8c9d3d94af991585"} err="failed to get container status \"a07df10b2a280c299ee96c00d17faf9235288fb42c3743dd8c9d3d94af991585\": rpc error: code = NotFound desc = could not find container \"a07df10b2a280c299ee96c00d17faf9235288fb42c3743dd8c9d3d94af991585\": container with ID starting with a07df10b2a280c299ee96c00d17faf9235288fb42c3743dd8c9d3d94af991585 not found: ID does not exist" Feb 04 11:40:29 crc kubenswrapper[4728]: I0204 11:40:29.561331 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" path="/var/lib/kubelet/pods/8d9fcf0f-5658-4803-8c51-856c7ee9673b/volumes" Feb 04 11:40:30 crc kubenswrapper[4728]: I0204 11:40:30.455587 4728 generic.go:334] "Generic (PLEG): container finished" podID="f289245d-75bb-4d65-ada8-8657551179f4" containerID="3dfc66d30ed0bc1e47ed7f0b807b3fa86d2ebb26d890115c589aa6b372b05501" exitCode=0 Feb 04 11:40:30 crc kubenswrapper[4728]: I0204 11:40:30.455628 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcm46" event={"ID":"f289245d-75bb-4d65-ada8-8657551179f4","Type":"ContainerDied","Data":"3dfc66d30ed0bc1e47ed7f0b807b3fa86d2ebb26d890115c589aa6b372b05501"} Feb 04 11:40:31 crc kubenswrapper[4728]: I0204 11:40:31.464929 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcm46" 
event={"ID":"f289245d-75bb-4d65-ada8-8657551179f4","Type":"ContainerStarted","Data":"75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5"} Feb 04 11:40:31 crc kubenswrapper[4728]: I0204 11:40:31.482606 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vcm46" podStartSLOduration=4.090382682 podStartE2EDuration="6.482590289s" podCreationTimestamp="2026-02-04 11:40:25 +0000 UTC" firstStartedPulling="2026-02-04 11:40:28.43128085 +0000 UTC m=+777.573985235" lastFinishedPulling="2026-02-04 11:40:30.823488467 +0000 UTC m=+779.966192842" observedRunningTime="2026-02-04 11:40:31.482066157 +0000 UTC m=+780.624770542" watchObservedRunningTime="2026-02-04 11:40:31.482590289 +0000 UTC m=+780.625294674" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.447483 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-677949fd65-qpw7t"] Feb 04 11:40:32 crc kubenswrapper[4728]: E0204 11:40:32.448080 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" containerName="registry-server" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.448096 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" containerName="registry-server" Feb 04 11:40:32 crc kubenswrapper[4728]: E0204 11:40:32.448113 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" containerName="extract-content" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.448121 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" containerName="extract-content" Feb 04 11:40:32 crc kubenswrapper[4728]: E0204 11:40:32.448151 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" containerName="extract-utilities" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.448162 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" containerName="extract-utilities" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.448316 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9fcf0f-5658-4803-8c51-856c7ee9673b" containerName="registry-server" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.449166 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-677949fd65-qpw7t" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.450329 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh2fv\" (UniqueName: \"kubernetes.io/projected/4bd9f410-cd23-48ce-b65b-04573f621b0c-kube-api-access-fh2fv\") pod \"nmstate-metrics-677949fd65-qpw7t\" (UID: \"4bd9f410-cd23-48ce-b65b-04573f621b0c\") " pod="openshift-nmstate/nmstate-metrics-677949fd65-qpw7t" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.451151 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-78488" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.456373 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-bd5678b45-jppz6"] Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.457097 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.459458 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.467975 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-677949fd65-qpw7t"] Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.473265 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-bd5678b45-jppz6"] Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.499621 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-l4f7z"] Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.500266 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.551301 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slbmt\" (UniqueName: \"kubernetes.io/projected/860f14fe-36e8-42e9-be2a-26ab378c6436-kube-api-access-slbmt\") pod \"nmstate-webhook-bd5678b45-jppz6\" (UID: \"860f14fe-36e8-42e9-be2a-26ab378c6436\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.551346 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2ae14687-d092-4364-b8d2-f97b412741f0-dbus-socket\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.551369 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2ae14687-d092-4364-b8d2-f97b412741f0-nmstate-lock\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.551384 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2ae14687-d092-4364-b8d2-f97b412741f0-ovs-socket\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.551441 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/860f14fe-36e8-42e9-be2a-26ab378c6436-tls-key-pair\") pod \"nmstate-webhook-bd5678b45-jppz6\" (UID: \"860f14fe-36e8-42e9-be2a-26ab378c6436\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.551457 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qvwc\" (UniqueName: \"kubernetes.io/projected/2ae14687-d092-4364-b8d2-f97b412741f0-kube-api-access-2qvwc\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.551512 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fh2fv\" (UniqueName: \"kubernetes.io/projected/4bd9f410-cd23-48ce-b65b-04573f621b0c-kube-api-access-fh2fv\") pod \"nmstate-metrics-677949fd65-qpw7t\" (UID: \"4bd9f410-cd23-48ce-b65b-04573f621b0c\") " pod="openshift-nmstate/nmstate-metrics-677949fd65-qpw7t" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.572888 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v"] Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.573720 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.576112 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-kmrnk" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.576542 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.578135 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.580695 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v"] Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.611178 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh2fv\" (UniqueName: \"kubernetes.io/projected/4bd9f410-cd23-48ce-b65b-04573f621b0c-kube-api-access-fh2fv\") pod \"nmstate-metrics-677949fd65-qpw7t\" (UID: \"4bd9f410-cd23-48ce-b65b-04573f621b0c\") " pod="openshift-nmstate/nmstate-metrics-677949fd65-qpw7t" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652391 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slbmt\" (UniqueName: \"kubernetes.io/projected/860f14fe-36e8-42e9-be2a-26ab378c6436-kube-api-access-slbmt\") pod \"nmstate-webhook-bd5678b45-jppz6\" (UID: \"860f14fe-36e8-42e9-be2a-26ab378c6436\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652434 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59qm8\" (UniqueName: \"kubernetes.io/projected/a9d1039a-1431-4b01-ac8a-173aba063825-kube-api-access-59qm8\") pod \"nmstate-console-plugin-6f874f9768-trc2v\" (UID: \"a9d1039a-1431-4b01-ac8a-173aba063825\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652457 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2ae14687-d092-4364-b8d2-f97b412741f0-dbus-socket\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652475 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2ae14687-d092-4364-b8d2-f97b412741f0-nmstate-lock\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652493 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2ae14687-d092-4364-b8d2-f97b412741f0-ovs-socket\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652516 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/860f14fe-36e8-42e9-be2a-26ab378c6436-tls-key-pair\") pod \"nmstate-webhook-bd5678b45-jppz6\" (UID: \"860f14fe-36e8-42e9-be2a-26ab378c6436\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652534 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qvwc\" (UniqueName: \"kubernetes.io/projected/2ae14687-d092-4364-b8d2-f97b412741f0-kube-api-access-2qvwc\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652549 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a9d1039a-1431-4b01-ac8a-173aba063825-nginx-conf\") pod \"nmstate-console-plugin-6f874f9768-trc2v\" (UID: \"a9d1039a-1431-4b01-ac8a-173aba063825\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652584 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d1039a-1431-4b01-ac8a-173aba063825-plugin-serving-cert\") pod \"nmstate-console-plugin-6f874f9768-trc2v\" (UID: \"a9d1039a-1431-4b01-ac8a-173aba063825\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652703 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2ae14687-d092-4364-b8d2-f97b412741f0-nmstate-lock\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652732 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2ae14687-d092-4364-b8d2-f97b412741f0-ovs-socket\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: E0204 11:40:32.652809 4728 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.652837 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2ae14687-d092-4364-b8d2-f97b412741f0-dbus-socket\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: E0204 11:40:32.652850 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/860f14fe-36e8-42e9-be2a-26ab378c6436-tls-key-pair podName:860f14fe-36e8-42e9-be2a-26ab378c6436 nodeName:}" failed. 
No retries permitted until 2026-02-04 11:40:33.152834541 +0000 UTC m=+782.295538926 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/860f14fe-36e8-42e9-be2a-26ab378c6436-tls-key-pair") pod "nmstate-webhook-bd5678b45-jppz6" (UID: "860f14fe-36e8-42e9-be2a-26ab378c6436") : secret "openshift-nmstate-webhook" not found Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.667399 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slbmt\" (UniqueName: \"kubernetes.io/projected/860f14fe-36e8-42e9-be2a-26ab378c6436-kube-api-access-slbmt\") pod \"nmstate-webhook-bd5678b45-jppz6\" (UID: \"860f14fe-36e8-42e9-be2a-26ab378c6436\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.679390 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qvwc\" (UniqueName: \"kubernetes.io/projected/2ae14687-d092-4364-b8d2-f97b412741f0-kube-api-access-2qvwc\") pod \"nmstate-handler-l4f7z\" (UID: \"2ae14687-d092-4364-b8d2-f97b412741f0\") " pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.751767 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69bc58b68d-jqccl"] Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.752436 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.753460 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59qm8\" (UniqueName: \"kubernetes.io/projected/a9d1039a-1431-4b01-ac8a-173aba063825-kube-api-access-59qm8\") pod \"nmstate-console-plugin-6f874f9768-trc2v\" (UID: \"a9d1039a-1431-4b01-ac8a-173aba063825\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.753558 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a9d1039a-1431-4b01-ac8a-173aba063825-nginx-conf\") pod \"nmstate-console-plugin-6f874f9768-trc2v\" (UID: \"a9d1039a-1431-4b01-ac8a-173aba063825\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.753595 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-oauth-serving-cert\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.753617 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-console-config\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.753646 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e8cdfa7-691b-436e-a949-c5fd89632174-console-serving-cert\") pod \"console-69bc58b68d-jqccl\" (UID: 
\"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.753664 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-service-ca\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.753693 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e8cdfa7-691b-436e-a949-c5fd89632174-console-oauth-config\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.753718 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb62b\" (UniqueName: \"kubernetes.io/projected/9e8cdfa7-691b-436e-a949-c5fd89632174-kube-api-access-cb62b\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.753742 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d1039a-1431-4b01-ac8a-173aba063825-plugin-serving-cert\") pod \"nmstate-console-plugin-6f874f9768-trc2v\" (UID: \"a9d1039a-1431-4b01-ac8a-173aba063825\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.753803 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-trusted-ca-bundle\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: E0204 11:40:32.754211 4728 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 04 11:40:32 crc kubenswrapper[4728]: E0204 11:40:32.754265 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9d1039a-1431-4b01-ac8a-173aba063825-plugin-serving-cert podName:a9d1039a-1431-4b01-ac8a-173aba063825 nodeName:}" failed. No retries permitted until 2026-02-04 11:40:33.254248928 +0000 UTC m=+782.396953313 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a9d1039a-1431-4b01-ac8a-173aba063825-plugin-serving-cert") pod "nmstate-console-plugin-6f874f9768-trc2v" (UID: "a9d1039a-1431-4b01-ac8a-173aba063825") : secret "plugin-serving-cert" not found Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.754892 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a9d1039a-1431-4b01-ac8a-173aba063825-nginx-conf\") pod \"nmstate-console-plugin-6f874f9768-trc2v\" (UID: \"a9d1039a-1431-4b01-ac8a-173aba063825\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.772146 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69bc58b68d-jqccl"] Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.772394 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-677949fd65-qpw7t" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.779495 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59qm8\" (UniqueName: \"kubernetes.io/projected/a9d1039a-1431-4b01-ac8a-173aba063825-kube-api-access-59qm8\") pod \"nmstate-console-plugin-6f874f9768-trc2v\" (UID: \"a9d1039a-1431-4b01-ac8a-173aba063825\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.831034 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.854253 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-oauth-serving-cert\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.854284 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-console-config\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.854308 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e8cdfa7-691b-436e-a949-c5fd89632174-console-serving-cert\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.854322 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-service-ca\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.854342 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e8cdfa7-691b-436e-a949-c5fd89632174-console-oauth-config\") pod \"console-69bc58b68d-jqccl\" (UID: 
\"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.854359 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb62b\" (UniqueName: \"kubernetes.io/projected/9e8cdfa7-691b-436e-a949-c5fd89632174-kube-api-access-cb62b\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.854398 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-trusted-ca-bundle\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: W0204 11:40:32.855191 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ae14687_d092_4364_b8d2_f97b412741f0.slice/crio-35e37c6305b8401a125c613aa412e981c27c721254c264bf104e5a4c3d4ef4fe WatchSource:0}: Error finding container 35e37c6305b8401a125c613aa412e981c27c721254c264bf104e5a4c3d4ef4fe: Status 404 returned error can't find the container with id 35e37c6305b8401a125c613aa412e981c27c721254c264bf104e5a4c3d4ef4fe Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.855544 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-trusted-ca-bundle\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.856652 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-oauth-serving-cert\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.857777 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-service-ca\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.858625 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e8cdfa7-691b-436e-a949-c5fd89632174-console-config\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.860845 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e8cdfa7-691b-436e-a949-c5fd89632174-console-serving-cert\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.862205 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/9e8cdfa7-691b-436e-a949-c5fd89632174-console-oauth-config\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:32 crc kubenswrapper[4728]: I0204 11:40:32.871805 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb62b\" (UniqueName: \"kubernetes.io/projected/9e8cdfa7-691b-436e-a949-c5fd89632174-kube-api-access-cb62b\") pod \"console-69bc58b68d-jqccl\" (UID: \"9e8cdfa7-691b-436e-a949-c5fd89632174\") " pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.068852 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.157041 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/860f14fe-36e8-42e9-be2a-26ab378c6436-tls-key-pair\") pod \"nmstate-webhook-bd5678b45-jppz6\" (UID: \"860f14fe-36e8-42e9-be2a-26ab378c6436\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.161520 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/860f14fe-36e8-42e9-be2a-26ab378c6436-tls-key-pair\") pod \"nmstate-webhook-bd5678b45-jppz6\" (UID: \"860f14fe-36e8-42e9-be2a-26ab378c6436\") " pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.192269 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-677949fd65-qpw7t"] Feb 04 11:40:33 crc kubenswrapper[4728]: W0204 11:40:33.204343 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bd9f410_cd23_48ce_b65b_04573f621b0c.slice/crio-251331f9a1edab43c681716e432e8aee2fb5d6704f87eb93d98dd2dee7e4d075 WatchSource:0}: Error finding container 251331f9a1edab43c681716e432e8aee2fb5d6704f87eb93d98dd2dee7e4d075: Status 404 returned error can't find the container with id 251331f9a1edab43c681716e432e8aee2fb5d6704f87eb93d98dd2dee7e4d075 Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.257974 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d1039a-1431-4b01-ac8a-173aba063825-plugin-serving-cert\") pod \"nmstate-console-plugin-6f874f9768-trc2v\" (UID: \"a9d1039a-1431-4b01-ac8a-173aba063825\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.262181 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d1039a-1431-4b01-ac8a-173aba063825-plugin-serving-cert\") pod \"nmstate-console-plugin-6f874f9768-trc2v\" (UID: \"a9d1039a-1431-4b01-ac8a-173aba063825\") " pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.319541 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69bc58b68d-jqccl"] Feb 04 11:40:33 crc kubenswrapper[4728]: W0204 11:40:33.327660 4728 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e8cdfa7_691b_436e_a949_c5fd89632174.slice/crio-f71ac0cc00dfaf90c32f8ea10eb91e6e58a7be233f8f6f7f80d3560923d6fdf5 WatchSource:0}: Error finding container f71ac0cc00dfaf90c32f8ea10eb91e6e58a7be233f8f6f7f80d3560923d6fdf5: Status 404 returned error can't find the container with id f71ac0cc00dfaf90c32f8ea10eb91e6e58a7be233f8f6f7f80d3560923d6fdf5 Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.384780 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.476703 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-l4f7z" event={"ID":"2ae14687-d092-4364-b8d2-f97b412741f0","Type":"ContainerStarted","Data":"35e37c6305b8401a125c613aa412e981c27c721254c264bf104e5a4c3d4ef4fe"} Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.478302 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-677949fd65-qpw7t" event={"ID":"4bd9f410-cd23-48ce-b65b-04573f621b0c","Type":"ContainerStarted","Data":"251331f9a1edab43c681716e432e8aee2fb5d6704f87eb93d98dd2dee7e4d075"} Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.479516 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69bc58b68d-jqccl" event={"ID":"9e8cdfa7-691b-436e-a949-c5fd89632174","Type":"ContainerStarted","Data":"f71ac0cc00dfaf90c32f8ea10eb91e6e58a7be233f8f6f7f80d3560923d6fdf5"} Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.516102 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.568975 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-bd5678b45-jppz6"] Feb 04 11:40:33 crc kubenswrapper[4728]: W0204 11:40:33.587492 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod860f14fe_36e8_42e9_be2a_26ab378c6436.slice/crio-afbd4cffe4d4e813150674e9f91b562177aa35aee43948708e2712d5f4efa60f WatchSource:0}: Error finding container afbd4cffe4d4e813150674e9f91b562177aa35aee43948708e2712d5f4efa60f: Status 404 returned error can't find the container with id afbd4cffe4d4e813150674e9f91b562177aa35aee43948708e2712d5f4efa60f Feb 04 11:40:33 crc kubenswrapper[4728]: I0204 11:40:33.933382 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v"] Feb 04 11:40:34 crc kubenswrapper[4728]: I0204 11:40:34.493177 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69bc58b68d-jqccl" event={"ID":"9e8cdfa7-691b-436e-a949-c5fd89632174","Type":"ContainerStarted","Data":"005863678b136b6be0385336873b4b234a6c7a135e7b49f2f9eca4a2b96c5788"} Feb 04 11:40:34 crc kubenswrapper[4728]: I0204 11:40:34.497830 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" event={"ID":"860f14fe-36e8-42e9-be2a-26ab378c6436","Type":"ContainerStarted","Data":"afbd4cffe4d4e813150674e9f91b562177aa35aee43948708e2712d5f4efa60f"} Feb 04 11:40:34 crc kubenswrapper[4728]: I0204 11:40:34.500512 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" 
event={"ID":"a9d1039a-1431-4b01-ac8a-173aba063825","Type":"ContainerStarted","Data":"d0958ffca7f9663f4dc4dff38ecceabe16c219937fbaf65bd58f7a893ca5d545"} Feb 04 11:40:34 crc kubenswrapper[4728]: I0204 11:40:34.520480 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69bc58b68d-jqccl" podStartSLOduration=2.520449625 podStartE2EDuration="2.520449625s" podCreationTimestamp="2026-02-04 11:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:40:34.516673281 +0000 UTC m=+783.659377666" watchObservedRunningTime="2026-02-04 11:40:34.520449625 +0000 UTC m=+783.663154010" Feb 04 11:40:35 crc kubenswrapper[4728]: I0204 11:40:35.448386 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:40:35 crc kubenswrapper[4728]: I0204 11:40:35.448450 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 11:40:36 crc kubenswrapper[4728]: I0204 11:40:36.513586 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" event={"ID":"860f14fe-36e8-42e9-be2a-26ab378c6436","Type":"ContainerStarted","Data":"fe31aa7b4e6e311000c1358bdaaf08460eb12703a06f645a11cdfe8903b9b2e6"} Feb 04 11:40:36 crc kubenswrapper[4728]: I0204 11:40:36.513721 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" Feb 04 11:40:36 crc kubenswrapper[4728]: I0204 11:40:36.515228 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-l4f7z" event={"ID":"2ae14687-d092-4364-b8d2-f97b412741f0","Type":"ContainerStarted","Data":"c8bd750c69442672211510eef39129183a2ac1197a8430d884fc68c12c67dcea"} Feb 04 11:40:36 crc kubenswrapper[4728]: I0204 11:40:36.515348 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:36 crc kubenswrapper[4728]: I0204 11:40:36.517363 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-677949fd65-qpw7t" event={"ID":"4bd9f410-cd23-48ce-b65b-04573f621b0c","Type":"ContainerStarted","Data":"247eb524916bb8940d7da795b416703dd4e53a5a4be8740522026c5a615edf08"} Feb 04 11:40:36 crc kubenswrapper[4728]: I0204 11:40:36.535888 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" podStartSLOduration=2.169258876 podStartE2EDuration="4.535864147s" podCreationTimestamp="2026-02-04 11:40:32 +0000 UTC" firstStartedPulling="2026-02-04 11:40:33.593280398 +0000 UTC m=+782.735984773" lastFinishedPulling="2026-02-04 11:40:35.959885649 +0000 UTC m=+785.102590044" observedRunningTime="2026-02-04 11:40:36.525763627 +0000 UTC m=+785.668468002" watchObservedRunningTime="2026-02-04 11:40:36.535864147 +0000 UTC m=+785.678568542" Feb 04 11:40:36 crc kubenswrapper[4728]: I0204 11:40:36.551662 4728 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-nmstate/nmstate-handler-l4f7z" podStartSLOduration=1.479810791 podStartE2EDuration="4.551640469s" podCreationTimestamp="2026-02-04 11:40:32 +0000 UTC" firstStartedPulling="2026-02-04 11:40:32.856715412 +0000 UTC m=+781.999419797" lastFinishedPulling="2026-02-04 11:40:35.92854509 +0000 UTC m=+785.071249475" observedRunningTime="2026-02-04 11:40:36.542135063 +0000 UTC m=+785.684839458" watchObservedRunningTime="2026-02-04 11:40:36.551640469 +0000 UTC m=+785.694344854" Feb 04 11:40:36 crc kubenswrapper[4728]: I0204 11:40:36.762069 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:36 crc kubenswrapper[4728]: I0204 11:40:36.762131 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:36 crc kubenswrapper[4728]: I0204 11:40:36.820947 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:37 crc kubenswrapper[4728]: I0204 11:40:37.523819 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" event={"ID":"a9d1039a-1431-4b01-ac8a-173aba063825","Type":"ContainerStarted","Data":"4709fa49a8f7815887a1cd3a663b1fe28998142c6432cfa8e2d729e50cd1b9f1"} Feb 04 11:40:37 crc kubenswrapper[4728]: I0204 11:40:37.540262 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6f874f9768-trc2v" podStartSLOduration=2.61204118 podStartE2EDuration="5.540246882s" podCreationTimestamp="2026-02-04 11:40:32 +0000 UTC" firstStartedPulling="2026-02-04 11:40:33.943195145 +0000 UTC m=+783.085899530" lastFinishedPulling="2026-02-04 11:40:36.871400837 +0000 UTC m=+786.014105232" observedRunningTime="2026-02-04 11:40:37.537634927 +0000 UTC m=+786.680339302" watchObservedRunningTime="2026-02-04 11:40:37.540246882 +0000 UTC m=+786.682951267" Feb 04 11:40:37 crc kubenswrapper[4728]: I0204 11:40:37.569452 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:37 crc kubenswrapper[4728]: I0204 11:40:37.610304 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcm46"] Feb 04 11:40:38 crc kubenswrapper[4728]: I0204 11:40:38.533544 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-677949fd65-qpw7t" event={"ID":"4bd9f410-cd23-48ce-b65b-04573f621b0c","Type":"ContainerStarted","Data":"1d0ffa3db29e111eb13d8b7b5cb214e6556a5d570a49db74b3e1951bedabb045"} Feb 04 11:40:38 crc kubenswrapper[4728]: I0204 11:40:38.551526 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-677949fd65-qpw7t" podStartSLOduration=1.604134827 podStartE2EDuration="6.551505836s" podCreationTimestamp="2026-02-04 11:40:32 +0000 UTC" firstStartedPulling="2026-02-04 11:40:33.207803678 +0000 UTC m=+782.350508113" lastFinishedPulling="2026-02-04 11:40:38.155174737 +0000 UTC m=+787.297879122" observedRunningTime="2026-02-04 11:40:38.547572398 +0000 UTC m=+787.690276793" watchObservedRunningTime="2026-02-04 11:40:38.551505836 +0000 UTC m=+787.694210231" Feb 04 11:40:39 crc kubenswrapper[4728]: I0204 11:40:39.538911 4728 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-vcm46" podUID="f289245d-75bb-4d65-ada8-8657551179f4" containerName="registry-server" containerID="cri-o://75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5" gracePeriod=2 Feb 04 11:40:39 crc kubenswrapper[4728]: I0204 11:40:39.902891 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.055388 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-catalog-content\") pod \"f289245d-75bb-4d65-ada8-8657551179f4\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.055455 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqtqv\" (UniqueName: \"kubernetes.io/projected/f289245d-75bb-4d65-ada8-8657551179f4-kube-api-access-qqtqv\") pod \"f289245d-75bb-4d65-ada8-8657551179f4\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.055477 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-utilities\") pod \"f289245d-75bb-4d65-ada8-8657551179f4\" (UID: \"f289245d-75bb-4d65-ada8-8657551179f4\") " Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.056455 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-utilities" (OuterVolumeSpecName: "utilities") pod "f289245d-75bb-4d65-ada8-8657551179f4" (UID: "f289245d-75bb-4d65-ada8-8657551179f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.063434 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f289245d-75bb-4d65-ada8-8657551179f4-kube-api-access-qqtqv" (OuterVolumeSpecName: "kube-api-access-qqtqv") pod "f289245d-75bb-4d65-ada8-8657551179f4" (UID: "f289245d-75bb-4d65-ada8-8657551179f4"). InnerVolumeSpecName "kube-api-access-qqtqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.156528 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqtqv\" (UniqueName: \"kubernetes.io/projected/f289245d-75bb-4d65-ada8-8657551179f4-kube-api-access-qqtqv\") on node \"crc\" DevicePath \"\"" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.156624 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.430548 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f289245d-75bb-4d65-ada8-8657551179f4" (UID: "f289245d-75bb-4d65-ada8-8657551179f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.460690 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f289245d-75bb-4d65-ada8-8657551179f4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.546900 4728 generic.go:334] "Generic (PLEG): container finished" podID="f289245d-75bb-4d65-ada8-8657551179f4" containerID="75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5" exitCode=0 Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.546945 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcm46" event={"ID":"f289245d-75bb-4d65-ada8-8657551179f4","Type":"ContainerDied","Data":"75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5"} Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.546977 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcm46" event={"ID":"f289245d-75bb-4d65-ada8-8657551179f4","Type":"ContainerDied","Data":"9cb93e56c98da4cdcc45a8ad1eb2ed1cab218d93f05ddca7f979fcad6dfc7c2f"} Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.546993 4728 scope.go:117] "RemoveContainer" containerID="75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.546996 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcm46" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.564926 4728 scope.go:117] "RemoveContainer" containerID="3dfc66d30ed0bc1e47ed7f0b807b3fa86d2ebb26d890115c589aa6b372b05501" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.583975 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcm46"] Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.590475 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vcm46"] Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.612108 4728 scope.go:117] "RemoveContainer" containerID="83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.626490 4728 scope.go:117] "RemoveContainer" containerID="75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5" Feb 04 11:40:40 crc kubenswrapper[4728]: E0204 11:40:40.627090 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5\": container with ID starting with 75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5 not found: ID does not exist" containerID="75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.627139 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5"} err="failed to get container status \"75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5\": rpc error: code = NotFound desc = could not find container \"75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5\": container with ID starting with 75459e9dc4d0e93d0a323e0ca3312d00f8f89e32f3278dbdfbf797dfa84e31b5 not found: ID does not exist" Feb 04 
11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.627168 4728 scope.go:117] "RemoveContainer" containerID="3dfc66d30ed0bc1e47ed7f0b807b3fa86d2ebb26d890115c589aa6b372b05501" Feb 04 11:40:40 crc kubenswrapper[4728]: E0204 11:40:40.627511 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dfc66d30ed0bc1e47ed7f0b807b3fa86d2ebb26d890115c589aa6b372b05501\": container with ID starting with 3dfc66d30ed0bc1e47ed7f0b807b3fa86d2ebb26d890115c589aa6b372b05501 not found: ID does not exist" containerID="3dfc66d30ed0bc1e47ed7f0b807b3fa86d2ebb26d890115c589aa6b372b05501" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.627544 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dfc66d30ed0bc1e47ed7f0b807b3fa86d2ebb26d890115c589aa6b372b05501"} err="failed to get container status \"3dfc66d30ed0bc1e47ed7f0b807b3fa86d2ebb26d890115c589aa6b372b05501\": rpc error: code = NotFound desc = could not find container \"3dfc66d30ed0bc1e47ed7f0b807b3fa86d2ebb26d890115c589aa6b372b05501\": container with ID starting with 3dfc66d30ed0bc1e47ed7f0b807b3fa86d2ebb26d890115c589aa6b372b05501 not found: ID does not exist" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.627563 4728 scope.go:117] "RemoveContainer" containerID="83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9" Feb 04 11:40:40 crc kubenswrapper[4728]: E0204 11:40:40.627891 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9\": container with ID starting with 83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9 not found: ID does not exist" containerID="83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9" Feb 04 11:40:40 crc kubenswrapper[4728]: I0204 11:40:40.627916 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9"} err="failed to get container status \"83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9\": rpc error: code = NotFound desc = could not find container \"83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9\": container with ID starting with 83890d8b7f9b4fa5e1fab15b9cc3c6003a356814b35d1818063e17afd5a03dd9 not found: ID does not exist" Feb 04 11:40:41 crc kubenswrapper[4728]: I0204 11:40:41.558896 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f289245d-75bb-4d65-ada8-8657551179f4" path="/var/lib/kubelet/pods/f289245d-75bb-4d65-ada8-8657551179f4/volumes" Feb 04 11:40:42 crc kubenswrapper[4728]: I0204 11:40:42.864390 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-l4f7z" Feb 04 11:40:43 crc kubenswrapper[4728]: I0204 11:40:43.069304 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:43 crc kubenswrapper[4728]: I0204 11:40:43.069382 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:43 crc kubenswrapper[4728]: I0204 11:40:43.076541 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:43 crc kubenswrapper[4728]: I0204 11:40:43.572645 4728 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-console/console-69bc58b68d-jqccl" Feb 04 11:40:43 crc kubenswrapper[4728]: I0204 11:40:43.625293 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c4ckr"] Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.396815 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vhj6v"] Feb 04 11:40:45 crc kubenswrapper[4728]: E0204 11:40:45.397037 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f289245d-75bb-4d65-ada8-8657551179f4" containerName="extract-content" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.397049 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f289245d-75bb-4d65-ada8-8657551179f4" containerName="extract-content" Feb 04 11:40:45 crc kubenswrapper[4728]: E0204 11:40:45.397062 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f289245d-75bb-4d65-ada8-8657551179f4" containerName="registry-server" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.397070 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f289245d-75bb-4d65-ada8-8657551179f4" containerName="registry-server" Feb 04 11:40:45 crc kubenswrapper[4728]: E0204 11:40:45.397093 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f289245d-75bb-4d65-ada8-8657551179f4" containerName="extract-utilities" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.397101 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f289245d-75bb-4d65-ada8-8657551179f4" containerName="extract-utilities" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.397206 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f289245d-75bb-4d65-ada8-8657551179f4" containerName="registry-server" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.398002 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.405252 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhj6v"] Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.521683 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-utilities\") pod \"community-operators-vhj6v\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.521798 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-catalog-content\") pod \"community-operators-vhj6v\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.521875 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlcq9\" (UniqueName: \"kubernetes.io/projected/e46b3554-0974-4376-a255-4b1c25947f3a-kube-api-access-qlcq9\") pod \"community-operators-vhj6v\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.623301 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-utilities\") pod \"community-operators-vhj6v\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.623377 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-catalog-content\") pod \"community-operators-vhj6v\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.623476 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlcq9\" (UniqueName: \"kubernetes.io/projected/e46b3554-0974-4376-a255-4b1c25947f3a-kube-api-access-qlcq9\") pod \"community-operators-vhj6v\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.623828 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-utilities\") pod \"community-operators-vhj6v\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.623908 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-catalog-content\") pod \"community-operators-vhj6v\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.641738 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qlcq9\" (UniqueName: \"kubernetes.io/projected/e46b3554-0974-4376-a255-4b1c25947f3a-kube-api-access-qlcq9\") pod \"community-operators-vhj6v\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.721874 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:45 crc kubenswrapper[4728]: I0204 11:40:45.945987 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhj6v"] Feb 04 11:40:46 crc kubenswrapper[4728]: I0204 11:40:46.588261 4728 generic.go:334] "Generic (PLEG): container finished" podID="e46b3554-0974-4376-a255-4b1c25947f3a" containerID="894dbed20c646abf8a451f8e921887cd8abb4f0908ad9633d841173e9920628f" exitCode=0 Feb 04 11:40:46 crc kubenswrapper[4728]: I0204 11:40:46.588369 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhj6v" event={"ID":"e46b3554-0974-4376-a255-4b1c25947f3a","Type":"ContainerDied","Data":"894dbed20c646abf8a451f8e921887cd8abb4f0908ad9633d841173e9920628f"} Feb 04 11:40:46 crc kubenswrapper[4728]: I0204 11:40:46.588613 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhj6v" event={"ID":"e46b3554-0974-4376-a255-4b1c25947f3a","Type":"ContainerStarted","Data":"0ff706404526b7ec8fbfd04a00573532f9c863fa983c6f829be0575ba9430122"} Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.184987 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tvbqm"] Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.191021 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.193146 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvbqm"] Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.257395 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-utilities\") pod \"redhat-marketplace-tvbqm\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.257563 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-catalog-content\") pod \"redhat-marketplace-tvbqm\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.257653 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvzgd\" (UniqueName: \"kubernetes.io/projected/c1c5f695-89de-44ef-bfc0-b977d07a51dd-kube-api-access-qvzgd\") pod \"redhat-marketplace-tvbqm\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.358243 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-utilities\") pod \"redhat-marketplace-tvbqm\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.358332 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-catalog-content\") pod \"redhat-marketplace-tvbqm\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.358384 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvzgd\" (UniqueName: \"kubernetes.io/projected/c1c5f695-89de-44ef-bfc0-b977d07a51dd-kube-api-access-qvzgd\") pod \"redhat-marketplace-tvbqm\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.358767 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-utilities\") pod \"redhat-marketplace-tvbqm\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.358846 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-catalog-content\") pod \"redhat-marketplace-tvbqm\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.383945 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qvzgd\" (UniqueName: \"kubernetes.io/projected/c1c5f695-89de-44ef-bfc0-b977d07a51dd-kube-api-access-qvzgd\") pod \"redhat-marketplace-tvbqm\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.533450 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:48 crc kubenswrapper[4728]: I0204 11:40:48.953774 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvbqm"] Feb 04 11:40:49 crc kubenswrapper[4728]: I0204 11:40:49.607381 4728 generic.go:334] "Generic (PLEG): container finished" podID="e46b3554-0974-4376-a255-4b1c25947f3a" containerID="62aaafd966b5910b80a330a42b12b1c9c2f432ddcdc5d9407e3af89f90d85c8d" exitCode=0 Feb 04 11:40:49 crc kubenswrapper[4728]: I0204 11:40:49.607444 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhj6v" event={"ID":"e46b3554-0974-4376-a255-4b1c25947f3a","Type":"ContainerDied","Data":"62aaafd966b5910b80a330a42b12b1c9c2f432ddcdc5d9407e3af89f90d85c8d"} Feb 04 11:40:49 crc kubenswrapper[4728]: I0204 11:40:49.609709 4728 generic.go:334] "Generic (PLEG): container finished" podID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" containerID="3f2ac0c54287d3809b9a14dba03cd69f72dcbb52dde22af706740f5e01d1b054" exitCode=0 Feb 04 11:40:49 crc kubenswrapper[4728]: I0204 11:40:49.609738 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvbqm" event={"ID":"c1c5f695-89de-44ef-bfc0-b977d07a51dd","Type":"ContainerDied","Data":"3f2ac0c54287d3809b9a14dba03cd69f72dcbb52dde22af706740f5e01d1b054"} Feb 04 11:40:49 crc kubenswrapper[4728]: I0204 11:40:49.609806 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvbqm" event={"ID":"c1c5f695-89de-44ef-bfc0-b977d07a51dd","Type":"ContainerStarted","Data":"f6eb6f1c18ace98aad1296a3ab365fff801db55c74471b9d9cf5ceee9cbd057d"} Feb 04 11:40:50 crc kubenswrapper[4728]: I0204 11:40:50.619106 4728 generic.go:334] "Generic (PLEG): container finished" podID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" containerID="37136b2c7bfd33a925d7e4f5ed4576e1f212aad7967eb35657ced426f7a879fc" exitCode=0 Feb 04 11:40:50 crc kubenswrapper[4728]: I0204 11:40:50.619281 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvbqm" event={"ID":"c1c5f695-89de-44ef-bfc0-b977d07a51dd","Type":"ContainerDied","Data":"37136b2c7bfd33a925d7e4f5ed4576e1f212aad7967eb35657ced426f7a879fc"} Feb 04 11:40:50 crc kubenswrapper[4728]: I0204 11:40:50.623681 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhj6v" event={"ID":"e46b3554-0974-4376-a255-4b1c25947f3a","Type":"ContainerStarted","Data":"804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168"} Feb 04 11:40:50 crc kubenswrapper[4728]: I0204 11:40:50.656300 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vhj6v" podStartSLOduration=2.234104633 podStartE2EDuration="5.656286109s" podCreationTimestamp="2026-02-04 11:40:45 +0000 UTC" firstStartedPulling="2026-02-04 11:40:46.58989044 +0000 UTC m=+795.732594825" lastFinishedPulling="2026-02-04 11:40:50.012071886 +0000 UTC m=+799.154776301" observedRunningTime="2026-02-04 11:40:50.656195427 +0000 UTC 
m=+799.798899832" watchObservedRunningTime="2026-02-04 11:40:50.656286109 +0000 UTC m=+799.798990484" Feb 04 11:40:51 crc kubenswrapper[4728]: I0204 11:40:51.631958 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvbqm" event={"ID":"c1c5f695-89de-44ef-bfc0-b977d07a51dd","Type":"ContainerStarted","Data":"18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe"} Feb 04 11:40:51 crc kubenswrapper[4728]: I0204 11:40:51.654398 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tvbqm" podStartSLOduration=2.25792494 podStartE2EDuration="3.654377537s" podCreationTimestamp="2026-02-04 11:40:48 +0000 UTC" firstStartedPulling="2026-02-04 11:40:49.612923057 +0000 UTC m=+798.755627442" lastFinishedPulling="2026-02-04 11:40:51.009375654 +0000 UTC m=+800.152080039" observedRunningTime="2026-02-04 11:40:51.649013944 +0000 UTC m=+800.791718369" watchObservedRunningTime="2026-02-04 11:40:51.654377537 +0000 UTC m=+800.797081922" Feb 04 11:40:53 crc kubenswrapper[4728]: I0204 11:40:53.390728 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-bd5678b45-jppz6" Feb 04 11:40:55 crc kubenswrapper[4728]: I0204 11:40:55.722014 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:55 crc kubenswrapper[4728]: I0204 11:40:55.722812 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:55 crc kubenswrapper[4728]: I0204 11:40:55.797569 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:56 crc kubenswrapper[4728]: I0204 11:40:56.696407 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:40:57 crc kubenswrapper[4728]: I0204 11:40:57.974383 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vhj6v"] Feb 04 11:40:58 crc kubenswrapper[4728]: I0204 11:40:58.533710 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:58 crc kubenswrapper[4728]: I0204 11:40:58.533926 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:58 crc kubenswrapper[4728]: I0204 11:40:58.583962 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:58 crc kubenswrapper[4728]: I0204 11:40:58.713911 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:40:59 crc kubenswrapper[4728]: I0204 11:40:59.672627 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vhj6v" podUID="e46b3554-0974-4376-a255-4b1c25947f3a" containerName="registry-server" containerID="cri-o://804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168" gracePeriod=2 Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.146529 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.310868 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlcq9\" (UniqueName: \"kubernetes.io/projected/e46b3554-0974-4376-a255-4b1c25947f3a-kube-api-access-qlcq9\") pod \"e46b3554-0974-4376-a255-4b1c25947f3a\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.310947 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-catalog-content\") pod \"e46b3554-0974-4376-a255-4b1c25947f3a\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.311089 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-utilities\") pod \"e46b3554-0974-4376-a255-4b1c25947f3a\" (UID: \"e46b3554-0974-4376-a255-4b1c25947f3a\") " Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.312160 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-utilities" (OuterVolumeSpecName: "utilities") pod "e46b3554-0974-4376-a255-4b1c25947f3a" (UID: "e46b3554-0974-4376-a255-4b1c25947f3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.317942 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e46b3554-0974-4376-a255-4b1c25947f3a-kube-api-access-qlcq9" (OuterVolumeSpecName: "kube-api-access-qlcq9") pod "e46b3554-0974-4376-a255-4b1c25947f3a" (UID: "e46b3554-0974-4376-a255-4b1c25947f3a"). InnerVolumeSpecName "kube-api-access-qlcq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.412701 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.412737 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlcq9\" (UniqueName: \"kubernetes.io/projected/e46b3554-0974-4376-a255-4b1c25947f3a-kube-api-access-qlcq9\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.572645 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvbqm"] Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.681136 4728 generic.go:334] "Generic (PLEG): container finished" podID="e46b3554-0974-4376-a255-4b1c25947f3a" containerID="804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168" exitCode=0 Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.682111 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tvbqm" podUID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" containerName="registry-server" containerID="cri-o://18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe" gracePeriod=2 Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.682606 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vhj6v" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.683054 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhj6v" event={"ID":"e46b3554-0974-4376-a255-4b1c25947f3a","Type":"ContainerDied","Data":"804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168"} Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.683110 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhj6v" event={"ID":"e46b3554-0974-4376-a255-4b1c25947f3a","Type":"ContainerDied","Data":"0ff706404526b7ec8fbfd04a00573532f9c863fa983c6f829be0575ba9430122"} Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.683141 4728 scope.go:117] "RemoveContainer" containerID="804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.707387 4728 scope.go:117] "RemoveContainer" containerID="62aaafd966b5910b80a330a42b12b1c9c2f432ddcdc5d9407e3af89f90d85c8d" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.722601 4728 scope.go:117] "RemoveContainer" containerID="894dbed20c646abf8a451f8e921887cd8abb4f0908ad9633d841173e9920628f" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.739500 4728 scope.go:117] "RemoveContainer" containerID="804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168" Feb 04 11:41:00 crc kubenswrapper[4728]: E0204 11:41:00.740250 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168\": container with ID starting with 804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168 not found: ID does not exist" containerID="804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.740298 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168"} err="failed to get container status \"804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168\": rpc error: code = NotFound desc = could not find container \"804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168\": container with ID starting with 804fa67558d1e40481a5a4b55bda4d887a3805b4c8d87a73528098abafee1168 not found: ID does not exist" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.740319 4728 scope.go:117] "RemoveContainer" containerID="62aaafd966b5910b80a330a42b12b1c9c2f432ddcdc5d9407e3af89f90d85c8d" Feb 04 11:41:00 crc kubenswrapper[4728]: E0204 11:41:00.740817 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62aaafd966b5910b80a330a42b12b1c9c2f432ddcdc5d9407e3af89f90d85c8d\": container with ID starting with 62aaafd966b5910b80a330a42b12b1c9c2f432ddcdc5d9407e3af89f90d85c8d not found: ID does not exist" containerID="62aaafd966b5910b80a330a42b12b1c9c2f432ddcdc5d9407e3af89f90d85c8d" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.740860 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62aaafd966b5910b80a330a42b12b1c9c2f432ddcdc5d9407e3af89f90d85c8d"} err="failed to get container status \"62aaafd966b5910b80a330a42b12b1c9c2f432ddcdc5d9407e3af89f90d85c8d\": rpc error: code = NotFound desc = could not find container 
\"62aaafd966b5910b80a330a42b12b1c9c2f432ddcdc5d9407e3af89f90d85c8d\": container with ID starting with 62aaafd966b5910b80a330a42b12b1c9c2f432ddcdc5d9407e3af89f90d85c8d not found: ID does not exist" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.740890 4728 scope.go:117] "RemoveContainer" containerID="894dbed20c646abf8a451f8e921887cd8abb4f0908ad9633d841173e9920628f" Feb 04 11:41:00 crc kubenswrapper[4728]: E0204 11:41:00.741430 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"894dbed20c646abf8a451f8e921887cd8abb4f0908ad9633d841173e9920628f\": container with ID starting with 894dbed20c646abf8a451f8e921887cd8abb4f0908ad9633d841173e9920628f not found: ID does not exist" containerID="894dbed20c646abf8a451f8e921887cd8abb4f0908ad9633d841173e9920628f" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.741508 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894dbed20c646abf8a451f8e921887cd8abb4f0908ad9633d841173e9920628f"} err="failed to get container status \"894dbed20c646abf8a451f8e921887cd8abb4f0908ad9633d841173e9920628f\": rpc error: code = NotFound desc = could not find container \"894dbed20c646abf8a451f8e921887cd8abb4f0908ad9633d841173e9920628f\": container with ID starting with 894dbed20c646abf8a451f8e921887cd8abb4f0908ad9633d841173e9920628f not found: ID does not exist" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.912165 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e46b3554-0974-4376-a255-4b1c25947f3a" (UID: "e46b3554-0974-4376-a255-4b1c25947f3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:41:00 crc kubenswrapper[4728]: I0204 11:41:00.919159 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e46b3554-0974-4376-a255-4b1c25947f3a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.011186 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vhj6v"] Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.016009 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vhj6v"] Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.563355 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e46b3554-0974-4376-a255-4b1c25947f3a" path="/var/lib/kubelet/pods/e46b3554-0974-4376-a255-4b1c25947f3a/volumes" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.573812 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.690154 4728 generic.go:334] "Generic (PLEG): container finished" podID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" containerID="18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe" exitCode=0 Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.690224 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvbqm" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.690240 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvbqm" event={"ID":"c1c5f695-89de-44ef-bfc0-b977d07a51dd","Type":"ContainerDied","Data":"18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe"} Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.690299 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvbqm" event={"ID":"c1c5f695-89de-44ef-bfc0-b977d07a51dd","Type":"ContainerDied","Data":"f6eb6f1c18ace98aad1296a3ab365fff801db55c74471b9d9cf5ceee9cbd057d"} Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.690326 4728 scope.go:117] "RemoveContainer" containerID="18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.723476 4728 scope.go:117] "RemoveContainer" containerID="37136b2c7bfd33a925d7e4f5ed4576e1f212aad7967eb35657ced426f7a879fc" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.729550 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-utilities\") pod \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.729602 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-catalog-content\") pod \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.729628 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvzgd\" (UniqueName: \"kubernetes.io/projected/c1c5f695-89de-44ef-bfc0-b977d07a51dd-kube-api-access-qvzgd\") pod \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\" (UID: \"c1c5f695-89de-44ef-bfc0-b977d07a51dd\") " Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.730601 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-utilities" (OuterVolumeSpecName: "utilities") pod "c1c5f695-89de-44ef-bfc0-b977d07a51dd" (UID: "c1c5f695-89de-44ef-bfc0-b977d07a51dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.743873 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c5f695-89de-44ef-bfc0-b977d07a51dd-kube-api-access-qvzgd" (OuterVolumeSpecName: "kube-api-access-qvzgd") pod "c1c5f695-89de-44ef-bfc0-b977d07a51dd" (UID: "c1c5f695-89de-44ef-bfc0-b977d07a51dd"). InnerVolumeSpecName "kube-api-access-qvzgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.748415 4728 scope.go:117] "RemoveContainer" containerID="3f2ac0c54287d3809b9a14dba03cd69f72dcbb52dde22af706740f5e01d1b054" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.761588 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1c5f695-89de-44ef-bfc0-b977d07a51dd" (UID: "c1c5f695-89de-44ef-bfc0-b977d07a51dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.788138 4728 scope.go:117] "RemoveContainer" containerID="18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe" Feb 04 11:41:01 crc kubenswrapper[4728]: E0204 11:41:01.788567 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe\": container with ID starting with 18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe not found: ID does not exist" containerID="18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.788611 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe"} err="failed to get container status \"18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe\": rpc error: code = NotFound desc = could not find container \"18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe\": container with ID starting with 18584d7e108f49f86f6cb9d9145733d3aacff4baeeedb47272dbb8058ba994fe not found: ID does not exist" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.788632 4728 scope.go:117] "RemoveContainer" containerID="37136b2c7bfd33a925d7e4f5ed4576e1f212aad7967eb35657ced426f7a879fc" Feb 04 11:41:01 crc kubenswrapper[4728]: E0204 11:41:01.788914 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37136b2c7bfd33a925d7e4f5ed4576e1f212aad7967eb35657ced426f7a879fc\": container with ID starting with 37136b2c7bfd33a925d7e4f5ed4576e1f212aad7967eb35657ced426f7a879fc not found: ID does not exist" containerID="37136b2c7bfd33a925d7e4f5ed4576e1f212aad7967eb35657ced426f7a879fc" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.788937 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37136b2c7bfd33a925d7e4f5ed4576e1f212aad7967eb35657ced426f7a879fc"} err="failed to get container status \"37136b2c7bfd33a925d7e4f5ed4576e1f212aad7967eb35657ced426f7a879fc\": rpc error: code = NotFound desc = could not find container \"37136b2c7bfd33a925d7e4f5ed4576e1f212aad7967eb35657ced426f7a879fc\": container with ID starting with 37136b2c7bfd33a925d7e4f5ed4576e1f212aad7967eb35657ced426f7a879fc not found: ID does not exist" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.788950 4728 scope.go:117] "RemoveContainer" containerID="3f2ac0c54287d3809b9a14dba03cd69f72dcbb52dde22af706740f5e01d1b054" Feb 04 11:41:01 crc kubenswrapper[4728]: E0204 11:41:01.789264 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3f2ac0c54287d3809b9a14dba03cd69f72dcbb52dde22af706740f5e01d1b054\": container with ID starting with 3f2ac0c54287d3809b9a14dba03cd69f72dcbb52dde22af706740f5e01d1b054 not found: ID does not exist" containerID="3f2ac0c54287d3809b9a14dba03cd69f72dcbb52dde22af706740f5e01d1b054" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.789286 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2ac0c54287d3809b9a14dba03cd69f72dcbb52dde22af706740f5e01d1b054"} err="failed to get container status \"3f2ac0c54287d3809b9a14dba03cd69f72dcbb52dde22af706740f5e01d1b054\": rpc error: code = NotFound desc = could not find container \"3f2ac0c54287d3809b9a14dba03cd69f72dcbb52dde22af706740f5e01d1b054\": container with ID starting with 3f2ac0c54287d3809b9a14dba03cd69f72dcbb52dde22af706740f5e01d1b054 not found: ID does not exist" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.830781 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.830814 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c5f695-89de-44ef-bfc0-b977d07a51dd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:01 crc kubenswrapper[4728]: I0204 11:41:01.830826 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvzgd\" (UniqueName: \"kubernetes.io/projected/c1c5f695-89de-44ef-bfc0-b977d07a51dd-kube-api-access-qvzgd\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:02 crc kubenswrapper[4728]: I0204 11:41:02.028125 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvbqm"] Feb 04 11:41:02 crc kubenswrapper[4728]: I0204 11:41:02.033509 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvbqm"] Feb 04 11:41:03 crc kubenswrapper[4728]: I0204 11:41:03.560647 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" path="/var/lib/kubelet/pods/c1c5f695-89de-44ef-bfc0-b977d07a51dd/volumes" Feb 04 11:41:05 crc kubenswrapper[4728]: I0204 11:41:05.448690 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:41:05 crc kubenswrapper[4728]: I0204 11:41:05.449207 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 11:41:05 crc kubenswrapper[4728]: I0204 11:41:05.449277 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:41:05 crc kubenswrapper[4728]: I0204 11:41:05.450181 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1141676880e32a0c8de5aba6aaf202ec56fa7791680367a5b1bd8fc7c075b2b"} 
pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 11:41:05 crc kubenswrapper[4728]: I0204 11:41:05.450289 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://d1141676880e32a0c8de5aba6aaf202ec56fa7791680367a5b1bd8fc7c075b2b" gracePeriod=600 Feb 04 11:41:05 crc kubenswrapper[4728]: I0204 11:41:05.717485 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="d1141676880e32a0c8de5aba6aaf202ec56fa7791680367a5b1bd8fc7c075b2b" exitCode=0 Feb 04 11:41:05 crc kubenswrapper[4728]: I0204 11:41:05.717569 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"d1141676880e32a0c8de5aba6aaf202ec56fa7791680367a5b1bd8fc7c075b2b"} Feb 04 11:41:05 crc kubenswrapper[4728]: I0204 11:41:05.717893 4728 scope.go:117] "RemoveContainer" containerID="44bf5f747d4965ce6618c46b58c4eacb171a54b6f11bb718ba6061de1fa3a0cc" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.725466 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"57f05c207a10ae4fedd99430e02b6ac5fa7f5bce4ce363fc2daec5fefcb4c117"} Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.901851 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt"] Feb 04 11:41:06 crc kubenswrapper[4728]: E0204 11:41:06.902094 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46b3554-0974-4376-a255-4b1c25947f3a" containerName="extract-content" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.902108 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46b3554-0974-4376-a255-4b1c25947f3a" containerName="extract-content" Feb 04 11:41:06 crc kubenswrapper[4728]: E0204 11:41:06.902123 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" containerName="extract-utilities" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.902130 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" containerName="extract-utilities" Feb 04 11:41:06 crc kubenswrapper[4728]: E0204 11:41:06.902141 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" containerName="extract-content" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.902149 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" containerName="extract-content" Feb 04 11:41:06 crc kubenswrapper[4728]: E0204 11:41:06.902159 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46b3554-0974-4376-a255-4b1c25947f3a" containerName="registry-server" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.902165 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46b3554-0974-4376-a255-4b1c25947f3a" containerName="registry-server" Feb 04 11:41:06 crc kubenswrapper[4728]: E0204 11:41:06.902175 4728 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" containerName="registry-server" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.902182 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" containerName="registry-server" Feb 04 11:41:06 crc kubenswrapper[4728]: E0204 11:41:06.902192 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46b3554-0974-4376-a255-4b1c25947f3a" containerName="extract-utilities" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.902199 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46b3554-0974-4376-a255-4b1c25947f3a" containerName="extract-utilities" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.902335 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c5f695-89de-44ef-bfc0-b977d07a51dd" containerName="registry-server" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.902348 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e46b3554-0974-4376-a255-4b1c25947f3a" containerName="registry-server" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.904843 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.908158 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.912726 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt"] Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.995037 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-bundle\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.995081 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-util\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:06 crc kubenswrapper[4728]: I0204 11:41:06.995202 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzrk7\" (UniqueName: \"kubernetes.io/projected/23817eeb-0169-4a0b-bc08-5d83377e17b2-kube-api-access-kzrk7\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:07 crc kubenswrapper[4728]: I0204 11:41:07.096424 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-bundle\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " 
pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:07 crc kubenswrapper[4728]: I0204 11:41:07.096730 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-util\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:07 crc kubenswrapper[4728]: I0204 11:41:07.096911 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzrk7\" (UniqueName: \"kubernetes.io/projected/23817eeb-0169-4a0b-bc08-5d83377e17b2-kube-api-access-kzrk7\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:07 crc kubenswrapper[4728]: I0204 11:41:07.097046 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-bundle\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:07 crc kubenswrapper[4728]: I0204 11:41:07.097162 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-util\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:07 crc kubenswrapper[4728]: I0204 11:41:07.124498 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzrk7\" (UniqueName: \"kubernetes.io/projected/23817eeb-0169-4a0b-bc08-5d83377e17b2-kube-api-access-kzrk7\") pod \"b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:07 crc kubenswrapper[4728]: I0204 11:41:07.227854 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:07 crc kubenswrapper[4728]: I0204 11:41:07.429308 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt"] Feb 04 11:41:07 crc kubenswrapper[4728]: W0204 11:41:07.437209 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23817eeb_0169_4a0b_bc08_5d83377e17b2.slice/crio-91f181bef7191faebc32fb69e20ac9b203a80ec567a1e5a53493beb476e7987d WatchSource:0}: Error finding container 91f181bef7191faebc32fb69e20ac9b203a80ec567a1e5a53493beb476e7987d: Status 404 returned error can't find the container with id 91f181bef7191faebc32fb69e20ac9b203a80ec567a1e5a53493beb476e7987d Feb 04 11:41:07 crc kubenswrapper[4728]: I0204 11:41:07.733232 4728 generic.go:334] "Generic (PLEG): container finished" podID="23817eeb-0169-4a0b-bc08-5d83377e17b2" containerID="7772c332655760743b80a64217dc1a5302385d8e0683c13132f458aee09f96e5" exitCode=0 Feb 04 11:41:07 crc kubenswrapper[4728]: I0204 11:41:07.733319 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" event={"ID":"23817eeb-0169-4a0b-bc08-5d83377e17b2","Type":"ContainerDied","Data":"7772c332655760743b80a64217dc1a5302385d8e0683c13132f458aee09f96e5"} Feb 04 11:41:07 crc kubenswrapper[4728]: I0204 11:41:07.733489 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" event={"ID":"23817eeb-0169-4a0b-bc08-5d83377e17b2","Type":"ContainerStarted","Data":"91f181bef7191faebc32fb69e20ac9b203a80ec567a1e5a53493beb476e7987d"} Feb 04 11:41:08 crc kubenswrapper[4728]: I0204 11:41:08.661404 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-c4ckr" podUID="86a5137c-eb55-438a-8e8d-99f2a2d4bf48" containerName="console" containerID="cri-o://eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897" gracePeriod=15 Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.015681 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c4ckr_86a5137c-eb55-438a-8e8d-99f2a2d4bf48/console/0.log" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.016075 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.121166 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-serving-cert\") pod \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.121220 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-trusted-ca-bundle\") pod \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.121250 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-service-ca\") pod \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.121289 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-oauth-serving-cert\") pod \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.121308 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-oauth-config\") pod \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.121335 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-config\") pod \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.121353 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw56n\" (UniqueName: \"kubernetes.io/projected/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-kube-api-access-nw56n\") pod \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\" (UID: \"86a5137c-eb55-438a-8e8d-99f2a2d4bf48\") " Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.122129 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "86a5137c-eb55-438a-8e8d-99f2a2d4bf48" (UID: "86a5137c-eb55-438a-8e8d-99f2a2d4bf48"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.122176 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "86a5137c-eb55-438a-8e8d-99f2a2d4bf48" (UID: "86a5137c-eb55-438a-8e8d-99f2a2d4bf48"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.122740 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-service-ca" (OuterVolumeSpecName: "service-ca") pod "86a5137c-eb55-438a-8e8d-99f2a2d4bf48" (UID: "86a5137c-eb55-438a-8e8d-99f2a2d4bf48"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.122729 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-config" (OuterVolumeSpecName: "console-config") pod "86a5137c-eb55-438a-8e8d-99f2a2d4bf48" (UID: "86a5137c-eb55-438a-8e8d-99f2a2d4bf48"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.127392 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "86a5137c-eb55-438a-8e8d-99f2a2d4bf48" (UID: "86a5137c-eb55-438a-8e8d-99f2a2d4bf48"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.127702 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "86a5137c-eb55-438a-8e8d-99f2a2d4bf48" (UID: "86a5137c-eb55-438a-8e8d-99f2a2d4bf48"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.128041 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-kube-api-access-nw56n" (OuterVolumeSpecName: "kube-api-access-nw56n") pod "86a5137c-eb55-438a-8e8d-99f2a2d4bf48" (UID: "86a5137c-eb55-438a-8e8d-99f2a2d4bf48"). InnerVolumeSpecName "kube-api-access-nw56n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.223310 4728 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.223361 4728 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.223373 4728 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.223384 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw56n\" (UniqueName: \"kubernetes.io/projected/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-kube-api-access-nw56n\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.223396 4728 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.223407 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.223418 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a5137c-eb55-438a-8e8d-99f2a2d4bf48-service-ca\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.755975 4728 generic.go:334] "Generic (PLEG): container finished" podID="23817eeb-0169-4a0b-bc08-5d83377e17b2" containerID="f3a4f4014c06ef9d47c64bda014bb044ed3b825cae8f623835599833cadd7fed" exitCode=0 Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.755983 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" event={"ID":"23817eeb-0169-4a0b-bc08-5d83377e17b2","Type":"ContainerDied","Data":"f3a4f4014c06ef9d47c64bda014bb044ed3b825cae8f623835599833cadd7fed"} Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.759442 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c4ckr_86a5137c-eb55-438a-8e8d-99f2a2d4bf48/console/0.log" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.759490 4728 generic.go:334] "Generic (PLEG): container finished" podID="86a5137c-eb55-438a-8e8d-99f2a2d4bf48" containerID="eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897" exitCode=2 Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.759525 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4ckr" event={"ID":"86a5137c-eb55-438a-8e8d-99f2a2d4bf48","Type":"ContainerDied","Data":"eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897"} Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.759554 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4ckr" 
event={"ID":"86a5137c-eb55-438a-8e8d-99f2a2d4bf48","Type":"ContainerDied","Data":"0f18ba12a390b3029b68df341aca4e29a1e7dc264a8ac5e0f826be6032b15cbd"} Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.759574 4728 scope.go:117] "RemoveContainer" containerID="eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.759598 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c4ckr" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.777922 4728 scope.go:117] "RemoveContainer" containerID="eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897" Feb 04 11:41:09 crc kubenswrapper[4728]: E0204 11:41:09.781584 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897\": container with ID starting with eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897 not found: ID does not exist" containerID="eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.781659 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897"} err="failed to get container status \"eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897\": rpc error: code = NotFound desc = could not find container \"eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897\": container with ID starting with eed749c945dee033ad4efea7b7704b1e87bea855db34edbc13cd80b9363f6897 not found: ID does not exist" Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.788795 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c4ckr"] Feb 04 11:41:09 crc kubenswrapper[4728]: I0204 11:41:09.796286 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-c4ckr"] Feb 04 11:41:10 crc kubenswrapper[4728]: I0204 11:41:10.774411 4728 generic.go:334] "Generic (PLEG): container finished" podID="23817eeb-0169-4a0b-bc08-5d83377e17b2" containerID="de51b51ee5fa09766b707c77046261268bd1ce2e17bffab98b09f419212e10c6" exitCode=0 Feb 04 11:41:10 crc kubenswrapper[4728]: I0204 11:41:10.774455 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" event={"ID":"23817eeb-0169-4a0b-bc08-5d83377e17b2","Type":"ContainerDied","Data":"de51b51ee5fa09766b707c77046261268bd1ce2e17bffab98b09f419212e10c6"} Feb 04 11:41:11 crc kubenswrapper[4728]: I0204 11:41:11.561723 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a5137c-eb55-438a-8e8d-99f2a2d4bf48" path="/var/lib/kubelet/pods/86a5137c-eb55-438a-8e8d-99f2a2d4bf48/volumes" Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.013801 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.159333 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzrk7\" (UniqueName: \"kubernetes.io/projected/23817eeb-0169-4a0b-bc08-5d83377e17b2-kube-api-access-kzrk7\") pod \"23817eeb-0169-4a0b-bc08-5d83377e17b2\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.159697 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-util\") pod \"23817eeb-0169-4a0b-bc08-5d83377e17b2\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.159817 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-bundle\") pod \"23817eeb-0169-4a0b-bc08-5d83377e17b2\" (UID: \"23817eeb-0169-4a0b-bc08-5d83377e17b2\") " Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.161055 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-bundle" (OuterVolumeSpecName: "bundle") pod "23817eeb-0169-4a0b-bc08-5d83377e17b2" (UID: "23817eeb-0169-4a0b-bc08-5d83377e17b2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.163998 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23817eeb-0169-4a0b-bc08-5d83377e17b2-kube-api-access-kzrk7" (OuterVolumeSpecName: "kube-api-access-kzrk7") pod "23817eeb-0169-4a0b-bc08-5d83377e17b2" (UID: "23817eeb-0169-4a0b-bc08-5d83377e17b2"). InnerVolumeSpecName "kube-api-access-kzrk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.181032 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-util" (OuterVolumeSpecName: "util") pod "23817eeb-0169-4a0b-bc08-5d83377e17b2" (UID: "23817eeb-0169-4a0b-bc08-5d83377e17b2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.261251 4728 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-util\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.261313 4728 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/23817eeb-0169-4a0b-bc08-5d83377e17b2-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.261333 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzrk7\" (UniqueName: \"kubernetes.io/projected/23817eeb-0169-4a0b-bc08-5d83377e17b2-kube-api-access-kzrk7\") on node \"crc\" DevicePath \"\"" Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.791835 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" event={"ID":"23817eeb-0169-4a0b-bc08-5d83377e17b2","Type":"ContainerDied","Data":"91f181bef7191faebc32fb69e20ac9b203a80ec567a1e5a53493beb476e7987d"} Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.791876 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91f181bef7191faebc32fb69e20ac9b203a80ec567a1e5a53493beb476e7987d" Feb 04 11:41:12 crc kubenswrapper[4728]: I0204 11:41:12.792032 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.245167 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h"] Feb 04 11:41:20 crc kubenswrapper[4728]: E0204 11:41:20.245652 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23817eeb-0169-4a0b-bc08-5d83377e17b2" containerName="pull" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.245663 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="23817eeb-0169-4a0b-bc08-5d83377e17b2" containerName="pull" Feb 04 11:41:20 crc kubenswrapper[4728]: E0204 11:41:20.245679 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23817eeb-0169-4a0b-bc08-5d83377e17b2" containerName="extract" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.245685 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="23817eeb-0169-4a0b-bc08-5d83377e17b2" containerName="extract" Feb 04 11:41:20 crc kubenswrapper[4728]: E0204 11:41:20.245697 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23817eeb-0169-4a0b-bc08-5d83377e17b2" containerName="util" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.245703 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="23817eeb-0169-4a0b-bc08-5d83377e17b2" containerName="util" Feb 04 11:41:20 crc kubenswrapper[4728]: E0204 11:41:20.245711 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a5137c-eb55-438a-8e8d-99f2a2d4bf48" containerName="console" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.245718 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a5137c-eb55-438a-8e8d-99f2a2d4bf48" containerName="console" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.245876 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="23817eeb-0169-4a0b-bc08-5d83377e17b2" containerName="extract" Feb 
04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.245892 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a5137c-eb55-438a-8e8d-99f2a2d4bf48" containerName="console" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.246232 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.249032 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bd262" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.249969 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.250006 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.250087 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.250288 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.258305 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h"] Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.357194 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5066fff-a329-4c7b-a70f-cee08caa3393-apiservice-cert\") pod \"metallb-operator-controller-manager-7ffd8d88fd-dfn9h\" (UID: \"c5066fff-a329-4c7b-a70f-cee08caa3393\") " pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.357543 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwp9v\" (UniqueName: \"kubernetes.io/projected/c5066fff-a329-4c7b-a70f-cee08caa3393-kube-api-access-cwp9v\") pod \"metallb-operator-controller-manager-7ffd8d88fd-dfn9h\" (UID: \"c5066fff-a329-4c7b-a70f-cee08caa3393\") " pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.357712 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5066fff-a329-4c7b-a70f-cee08caa3393-webhook-cert\") pod \"metallb-operator-controller-manager-7ffd8d88fd-dfn9h\" (UID: \"c5066fff-a329-4c7b-a70f-cee08caa3393\") " pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.458742 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5066fff-a329-4c7b-a70f-cee08caa3393-webhook-cert\") pod \"metallb-operator-controller-manager-7ffd8d88fd-dfn9h\" (UID: \"c5066fff-a329-4c7b-a70f-cee08caa3393\") " pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.459106 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/c5066fff-a329-4c7b-a70f-cee08caa3393-apiservice-cert\") pod \"metallb-operator-controller-manager-7ffd8d88fd-dfn9h\" (UID: \"c5066fff-a329-4c7b-a70f-cee08caa3393\") " pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.459212 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwp9v\" (UniqueName: \"kubernetes.io/projected/c5066fff-a329-4c7b-a70f-cee08caa3393-kube-api-access-cwp9v\") pod \"metallb-operator-controller-manager-7ffd8d88fd-dfn9h\" (UID: \"c5066fff-a329-4c7b-a70f-cee08caa3393\") " pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.469837 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5066fff-a329-4c7b-a70f-cee08caa3393-apiservice-cert\") pod \"metallb-operator-controller-manager-7ffd8d88fd-dfn9h\" (UID: \"c5066fff-a329-4c7b-a70f-cee08caa3393\") " pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.470058 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5066fff-a329-4c7b-a70f-cee08caa3393-webhook-cert\") pod \"metallb-operator-controller-manager-7ffd8d88fd-dfn9h\" (UID: \"c5066fff-a329-4c7b-a70f-cee08caa3393\") " pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.478292 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwp9v\" (UniqueName: \"kubernetes.io/projected/c5066fff-a329-4c7b-a70f-cee08caa3393-kube-api-access-cwp9v\") pod \"metallb-operator-controller-manager-7ffd8d88fd-dfn9h\" (UID: \"c5066fff-a329-4c7b-a70f-cee08caa3393\") " pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.520284 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-868877877f-4z2fv"] Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.521314 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.523890 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.523930 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-fg5gf" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.523892 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.534599 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-868877877f-4z2fv"] Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.567117 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.664209 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbxpn\" (UniqueName: \"kubernetes.io/projected/1cc0e7f3-e154-4508-b731-34c7b2e5cd6e-kube-api-access-nbxpn\") pod \"metallb-operator-webhook-server-868877877f-4z2fv\" (UID: \"1cc0e7f3-e154-4508-b731-34c7b2e5cd6e\") " pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.664661 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc0e7f3-e154-4508-b731-34c7b2e5cd6e-webhook-cert\") pod \"metallb-operator-webhook-server-868877877f-4z2fv\" (UID: \"1cc0e7f3-e154-4508-b731-34c7b2e5cd6e\") " pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.664694 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc0e7f3-e154-4508-b731-34c7b2e5cd6e-apiservice-cert\") pod \"metallb-operator-webhook-server-868877877f-4z2fv\" (UID: \"1cc0e7f3-e154-4508-b731-34c7b2e5cd6e\") " pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.765344 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbxpn\" (UniqueName: \"kubernetes.io/projected/1cc0e7f3-e154-4508-b731-34c7b2e5cd6e-kube-api-access-nbxpn\") pod \"metallb-operator-webhook-server-868877877f-4z2fv\" (UID: \"1cc0e7f3-e154-4508-b731-34c7b2e5cd6e\") " pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.765403 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc0e7f3-e154-4508-b731-34c7b2e5cd6e-webhook-cert\") pod \"metallb-operator-webhook-server-868877877f-4z2fv\" (UID: \"1cc0e7f3-e154-4508-b731-34c7b2e5cd6e\") " pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.765437 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc0e7f3-e154-4508-b731-34c7b2e5cd6e-apiservice-cert\") pod \"metallb-operator-webhook-server-868877877f-4z2fv\" (UID: \"1cc0e7f3-e154-4508-b731-34c7b2e5cd6e\") " pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.770834 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc0e7f3-e154-4508-b731-34c7b2e5cd6e-webhook-cert\") pod \"metallb-operator-webhook-server-868877877f-4z2fv\" (UID: \"1cc0e7f3-e154-4508-b731-34c7b2e5cd6e\") " pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.771020 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc0e7f3-e154-4508-b731-34c7b2e5cd6e-apiservice-cert\") pod \"metallb-operator-webhook-server-868877877f-4z2fv\" (UID: \"1cc0e7f3-e154-4508-b731-34c7b2e5cd6e\") " 
pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.785738 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbxpn\" (UniqueName: \"kubernetes.io/projected/1cc0e7f3-e154-4508-b731-34c7b2e5cd6e-kube-api-access-nbxpn\") pod \"metallb-operator-webhook-server-868877877f-4z2fv\" (UID: \"1cc0e7f3-e154-4508-b731-34c7b2e5cd6e\") " pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.789406 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h"] Feb 04 11:41:20 crc kubenswrapper[4728]: W0204 11:41:20.800447 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5066fff_a329_4c7b_a70f_cee08caa3393.slice/crio-fb06d7e23cd713688bba40b4c1fc980fc80c9d684861b4cff2840f796b87cf9c WatchSource:0}: Error finding container fb06d7e23cd713688bba40b4c1fc980fc80c9d684861b4cff2840f796b87cf9c: Status 404 returned error can't find the container with id fb06d7e23cd713688bba40b4c1fc980fc80c9d684861b4cff2840f796b87cf9c Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.836506 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" event={"ID":"c5066fff-a329-4c7b-a70f-cee08caa3393","Type":"ContainerStarted","Data":"fb06d7e23cd713688bba40b4c1fc980fc80c9d684861b4cff2840f796b87cf9c"} Feb 04 11:41:20 crc kubenswrapper[4728]: I0204 11:41:20.847005 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:21 crc kubenswrapper[4728]: I0204 11:41:21.159788 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-868877877f-4z2fv"] Feb 04 11:41:21 crc kubenswrapper[4728]: W0204 11:41:21.165932 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cc0e7f3_e154_4508_b731_34c7b2e5cd6e.slice/crio-38bcdc44f1b290f4c457dec1ffbe56816e86ece8f777b09f4002a70a8b8e5953 WatchSource:0}: Error finding container 38bcdc44f1b290f4c457dec1ffbe56816e86ece8f777b09f4002a70a8b8e5953: Status 404 returned error can't find the container with id 38bcdc44f1b290f4c457dec1ffbe56816e86ece8f777b09f4002a70a8b8e5953 Feb 04 11:41:21 crc kubenswrapper[4728]: I0204 11:41:21.844367 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" event={"ID":"1cc0e7f3-e154-4508-b731-34c7b2e5cd6e","Type":"ContainerStarted","Data":"38bcdc44f1b290f4c457dec1ffbe56816e86ece8f777b09f4002a70a8b8e5953"} Feb 04 11:41:25 crc kubenswrapper[4728]: I0204 11:41:25.874666 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" event={"ID":"c5066fff-a329-4c7b-a70f-cee08caa3393","Type":"ContainerStarted","Data":"614e8f473cec9763d585310a57d1216e6ad91f97939fbf585b91aa64b73e2d48"} Feb 04 11:41:25 crc kubenswrapper[4728]: I0204 11:41:25.875174 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:41:25 crc kubenswrapper[4728]: I0204 11:41:25.876604 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" event={"ID":"1cc0e7f3-e154-4508-b731-34c7b2e5cd6e","Type":"ContainerStarted","Data":"c9870a3f266da9929bc2dd6ec0439d5a6feb32298f4ed252f725d4b2dd7b725d"} Feb 04 11:41:25 crc kubenswrapper[4728]: I0204 11:41:25.876761 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:41:25 crc kubenswrapper[4728]: I0204 11:41:25.897310 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" podStartSLOduration=1.545971161 podStartE2EDuration="5.897288593s" podCreationTimestamp="2026-02-04 11:41:20 +0000 UTC" firstStartedPulling="2026-02-04 11:41:20.808094663 +0000 UTC m=+829.950799048" lastFinishedPulling="2026-02-04 11:41:25.159412095 +0000 UTC m=+834.302116480" observedRunningTime="2026-02-04 11:41:25.892107384 +0000 UTC m=+835.034811769" watchObservedRunningTime="2026-02-04 11:41:25.897288593 +0000 UTC m=+835.039992978" Feb 04 11:41:25 crc kubenswrapper[4728]: I0204 11:41:25.908087 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" podStartSLOduration=1.7768149229999999 podStartE2EDuration="5.908069151s" podCreationTimestamp="2026-02-04 11:41:20 +0000 UTC" firstStartedPulling="2026-02-04 11:41:21.170030569 +0000 UTC m=+830.312734954" lastFinishedPulling="2026-02-04 11:41:25.301284797 +0000 UTC m=+834.443989182" observedRunningTime="2026-02-04 11:41:25.908064391 +0000 UTC m=+835.050768776" watchObservedRunningTime="2026-02-04 11:41:25.908069151 +0000 UTC m=+835.050773526" Feb 04 11:41:40 crc kubenswrapper[4728]: I0204 11:41:40.854205 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-868877877f-4z2fv" Feb 04 11:42:00 crc kubenswrapper[4728]: I0204 11:42:00.572832 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7ffd8d88fd-dfn9h" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.202188 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-wk247"] Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.204738 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.207423 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.207816 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-j6ttr" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.219917 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.227869 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf"] Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.228736 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.236318 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.240571 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf"] Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.284434 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cmpf4"] Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.285283 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48bd\" (UniqueName: \"kubernetes.io/projected/eb981a9b-06b1-47a4-aa97-3d46980d3769-kube-api-access-d48bd\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.285354 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eb981a9b-06b1-47a4-aa97-3d46980d3769-frr-startup\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.285375 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-metrics\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.285467 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-frr-conf\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.285671 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-reloader\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.285768 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-frr-sockets\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.285826 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb981a9b-06b1-47a4-aa97-3d46980d3769-metrics-certs\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.286185 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cmpf4" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.287882 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6w2b7" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.288369 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.288574 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.288833 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.299092 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-9c48fdfd-5w5qt"] Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.300098 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.305396 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.323612 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-9c48fdfd-5w5qt"] Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.386990 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eb981a9b-06b1-47a4-aa97-3d46980d3769-frr-startup\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387028 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-metrics\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387047 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-frr-conf\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387070 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhhlh\" (UniqueName: \"kubernetes.io/projected/135aeb72-0473-4fa8-b594-c933ad100216-kube-api-access-rhhlh\") pod \"frr-k8s-webhook-server-97dfd4f9f-jv5kf\" (UID: \"135aeb72-0473-4fa8-b594-c933ad100216\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387096 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-reloader\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387113 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/135aeb72-0473-4fa8-b594-c933ad100216-cert\") pod 
\"frr-k8s-webhook-server-97dfd4f9f-jv5kf\" (UID: \"135aeb72-0473-4fa8-b594-c933ad100216\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387137 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-frr-sockets\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387188 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb981a9b-06b1-47a4-aa97-3d46980d3769-metrics-certs\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387224 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48bd\" (UniqueName: \"kubernetes.io/projected/eb981a9b-06b1-47a4-aa97-3d46980d3769-kube-api-access-d48bd\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: E0204 11:42:01.387353 4728 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 04 11:42:01 crc kubenswrapper[4728]: E0204 11:42:01.387397 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb981a9b-06b1-47a4-aa97-3d46980d3769-metrics-certs podName:eb981a9b-06b1-47a4-aa97-3d46980d3769 nodeName:}" failed. No retries permitted until 2026-02-04 11:42:01.88738516 +0000 UTC m=+871.030089545 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb981a9b-06b1-47a4-aa97-3d46980d3769-metrics-certs") pod "frr-k8s-wk247" (UID: "eb981a9b-06b1-47a4-aa97-3d46980d3769") : secret "frr-k8s-certs-secret" not found Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387523 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-metrics\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387599 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-reloader\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387713 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-frr-sockets\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387872 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eb981a9b-06b1-47a4-aa97-3d46980d3769-frr-startup\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.387874 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb981a9b-06b1-47a4-aa97-3d46980d3769-frr-conf\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.405563 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48bd\" (UniqueName: \"kubernetes.io/projected/eb981a9b-06b1-47a4-aa97-3d46980d3769-kube-api-access-d48bd\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.488594 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-metrics-certs\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.488651 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvsmm\" (UniqueName: \"kubernetes.io/projected/bea37505-bec7-466d-a718-00720e7102e8-kube-api-access-bvsmm\") pod \"controller-9c48fdfd-5w5qt\" (UID: \"bea37505-bec7-466d-a718-00720e7102e8\") " pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.488676 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5d51181d-348f-4581-8429-b8bbb614d0e7-metallb-excludel2\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 
11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.488702 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-memberlist\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.489339 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bea37505-bec7-466d-a718-00720e7102e8-metrics-certs\") pod \"controller-9c48fdfd-5w5qt\" (UID: \"bea37505-bec7-466d-a718-00720e7102e8\") " pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.489470 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhhlh\" (UniqueName: \"kubernetes.io/projected/135aeb72-0473-4fa8-b594-c933ad100216-kube-api-access-rhhlh\") pod \"frr-k8s-webhook-server-97dfd4f9f-jv5kf\" (UID: \"135aeb72-0473-4fa8-b594-c933ad100216\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.489516 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxvs7\" (UniqueName: \"kubernetes.io/projected/5d51181d-348f-4581-8429-b8bbb614d0e7-kube-api-access-zxvs7\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.489578 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/135aeb72-0473-4fa8-b594-c933ad100216-cert\") pod \"frr-k8s-webhook-server-97dfd4f9f-jv5kf\" (UID: \"135aeb72-0473-4fa8-b594-c933ad100216\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.489690 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bea37505-bec7-466d-a718-00720e7102e8-cert\") pod \"controller-9c48fdfd-5w5qt\" (UID: \"bea37505-bec7-466d-a718-00720e7102e8\") " pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.505286 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/135aeb72-0473-4fa8-b594-c933ad100216-cert\") pod \"frr-k8s-webhook-server-97dfd4f9f-jv5kf\" (UID: \"135aeb72-0473-4fa8-b594-c933ad100216\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.512363 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhhlh\" (UniqueName: \"kubernetes.io/projected/135aeb72-0473-4fa8-b594-c933ad100216-kube-api-access-rhhlh\") pod \"frr-k8s-webhook-server-97dfd4f9f-jv5kf\" (UID: \"135aeb72-0473-4fa8-b594-c933ad100216\") " pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.551064 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.592238 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bea37505-bec7-466d-a718-00720e7102e8-cert\") pod \"controller-9c48fdfd-5w5qt\" (UID: \"bea37505-bec7-466d-a718-00720e7102e8\") " pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.592295 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-metrics-certs\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.592335 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvsmm\" (UniqueName: \"kubernetes.io/projected/bea37505-bec7-466d-a718-00720e7102e8-kube-api-access-bvsmm\") pod \"controller-9c48fdfd-5w5qt\" (UID: \"bea37505-bec7-466d-a718-00720e7102e8\") " pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.592362 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5d51181d-348f-4581-8429-b8bbb614d0e7-metallb-excludel2\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.592388 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-memberlist\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.592419 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bea37505-bec7-466d-a718-00720e7102e8-metrics-certs\") pod \"controller-9c48fdfd-5w5qt\" (UID: \"bea37505-bec7-466d-a718-00720e7102e8\") " pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.592455 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxvs7\" (UniqueName: \"kubernetes.io/projected/5d51181d-348f-4581-8429-b8bbb614d0e7-kube-api-access-zxvs7\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:01 crc kubenswrapper[4728]: E0204 11:42:01.592741 4728 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 04 11:42:01 crc kubenswrapper[4728]: E0204 11:42:01.592806 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-memberlist podName:5d51181d-348f-4581-8429-b8bbb614d0e7 nodeName:}" failed. No retries permitted until 2026-02-04 11:42:02.092789992 +0000 UTC m=+871.235494377 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-memberlist") pod "speaker-cmpf4" (UID: "5d51181d-348f-4581-8429-b8bbb614d0e7") : secret "metallb-memberlist" not found Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.593587 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5d51181d-348f-4581-8429-b8bbb614d0e7-metallb-excludel2\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.598053 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.598638 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-metrics-certs\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.599371 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bea37505-bec7-466d-a718-00720e7102e8-metrics-certs\") pod \"controller-9c48fdfd-5w5qt\" (UID: \"bea37505-bec7-466d-a718-00720e7102e8\") " pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.606201 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bea37505-bec7-466d-a718-00720e7102e8-cert\") pod \"controller-9c48fdfd-5w5qt\" (UID: \"bea37505-bec7-466d-a718-00720e7102e8\") " pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.613029 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxvs7\" (UniqueName: \"kubernetes.io/projected/5d51181d-348f-4581-8429-b8bbb614d0e7-kube-api-access-zxvs7\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.617881 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvsmm\" (UniqueName: \"kubernetes.io/projected/bea37505-bec7-466d-a718-00720e7102e8-kube-api-access-bvsmm\") pod \"controller-9c48fdfd-5w5qt\" (UID: \"bea37505-bec7-466d-a718-00720e7102e8\") " pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.620474 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.898785 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb981a9b-06b1-47a4-aa97-3d46980d3769-metrics-certs\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:01 crc kubenswrapper[4728]: I0204 11:42:01.901931 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb981a9b-06b1-47a4-aa97-3d46980d3769-metrics-certs\") pod \"frr-k8s-wk247\" (UID: \"eb981a9b-06b1-47a4-aa97-3d46980d3769\") " pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:02 crc kubenswrapper[4728]: W0204 11:42:02.030192 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod135aeb72_0473_4fa8_b594_c933ad100216.slice/crio-59bf8c6a017e0f9f9beecaca38a3eafeb3216bee8790f64b3c3b2a9599167adb WatchSource:0}: Error finding container 59bf8c6a017e0f9f9beecaca38a3eafeb3216bee8790f64b3c3b2a9599167adb: Status 404 returned error can't find the container with id 59bf8c6a017e0f9f9beecaca38a3eafeb3216bee8790f64b3c3b2a9599167adb Feb 04 11:42:02 crc kubenswrapper[4728]: I0204 11:42:02.033088 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf"] Feb 04 11:42:02 crc kubenswrapper[4728]: I0204 11:42:02.062284 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" event={"ID":"135aeb72-0473-4fa8-b594-c933ad100216","Type":"ContainerStarted","Data":"59bf8c6a017e0f9f9beecaca38a3eafeb3216bee8790f64b3c3b2a9599167adb"} Feb 04 11:42:02 crc kubenswrapper[4728]: I0204 11:42:02.071355 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-9c48fdfd-5w5qt"] Feb 04 11:42:02 crc kubenswrapper[4728]: W0204 11:42:02.074911 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbea37505_bec7_466d_a718_00720e7102e8.slice/crio-7c9350bf964b38dc20b00e4d690a24f2c0d0d1053f7ad2bf1e76fffa037e65a5 WatchSource:0}: Error finding container 7c9350bf964b38dc20b00e4d690a24f2c0d0d1053f7ad2bf1e76fffa037e65a5: Status 404 returned error can't find the container with id 7c9350bf964b38dc20b00e4d690a24f2c0d0d1053f7ad2bf1e76fffa037e65a5 Feb 04 11:42:02 crc kubenswrapper[4728]: I0204 11:42:02.102132 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-memberlist\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:02 crc kubenswrapper[4728]: E0204 11:42:02.102476 4728 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 04 11:42:02 crc kubenswrapper[4728]: E0204 11:42:02.102908 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-memberlist podName:5d51181d-348f-4581-8429-b8bbb614d0e7 nodeName:}" failed. No retries permitted until 2026-02-04 11:42:03.102887554 +0000 UTC m=+872.245591939 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-memberlist") pod "speaker-cmpf4" (UID: "5d51181d-348f-4581-8429-b8bbb614d0e7") : secret "metallb-memberlist" not found Feb 04 11:42:02 crc kubenswrapper[4728]: I0204 11:42:02.120103 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:03 crc kubenswrapper[4728]: I0204 11:42:03.074608 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wk247" event={"ID":"eb981a9b-06b1-47a4-aa97-3d46980d3769","Type":"ContainerStarted","Data":"84eb596e99bee9d5541a75a7f4e188d1b4091892e59407bb6c8cdc3a7917fbfd"} Feb 04 11:42:03 crc kubenswrapper[4728]: I0204 11:42:03.078963 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-9c48fdfd-5w5qt" event={"ID":"bea37505-bec7-466d-a718-00720e7102e8","Type":"ContainerStarted","Data":"c0a6a4ddd1f2d25b62ca140b5ff26e17b108c638b65a83fbc02c2c30c212d711"} Feb 04 11:42:03 crc kubenswrapper[4728]: I0204 11:42:03.078984 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-9c48fdfd-5w5qt" event={"ID":"bea37505-bec7-466d-a718-00720e7102e8","Type":"ContainerStarted","Data":"8dcd6ea35594fab8ca38f731a155afb8b72a0729fc74d0c5ca74ef02dd6c7fc0"} Feb 04 11:42:03 crc kubenswrapper[4728]: I0204 11:42:03.078994 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-9c48fdfd-5w5qt" event={"ID":"bea37505-bec7-466d-a718-00720e7102e8","Type":"ContainerStarted","Data":"7c9350bf964b38dc20b00e4d690a24f2c0d0d1053f7ad2bf1e76fffa037e65a5"} Feb 04 11:42:03 crc kubenswrapper[4728]: I0204 11:42:03.079147 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:03 crc kubenswrapper[4728]: I0204 11:42:03.101628 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-9c48fdfd-5w5qt" podStartSLOduration=2.101608133 podStartE2EDuration="2.101608133s" podCreationTimestamp="2026-02-04 11:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:42:03.096189674 +0000 UTC m=+872.238894069" watchObservedRunningTime="2026-02-04 11:42:03.101608133 +0000 UTC m=+872.244312518" Feb 04 11:42:03 crc kubenswrapper[4728]: I0204 11:42:03.115958 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-memberlist\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:03 crc kubenswrapper[4728]: I0204 11:42:03.121468 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5d51181d-348f-4581-8429-b8bbb614d0e7-memberlist\") pod \"speaker-cmpf4\" (UID: \"5d51181d-348f-4581-8429-b8bbb614d0e7\") " pod="metallb-system/speaker-cmpf4" Feb 04 11:42:03 crc kubenswrapper[4728]: I0204 11:42:03.412941 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cmpf4" Feb 04 11:42:03 crc kubenswrapper[4728]: W0204 11:42:03.434071 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d51181d_348f_4581_8429_b8bbb614d0e7.slice/crio-7d74e32ed0e32f2b84ac90c585cab30ec350d5815620d96204f00abed62fc9ea WatchSource:0}: Error finding container 7d74e32ed0e32f2b84ac90c585cab30ec350d5815620d96204f00abed62fc9ea: Status 404 returned error can't find the container with id 7d74e32ed0e32f2b84ac90c585cab30ec350d5815620d96204f00abed62fc9ea Feb 04 11:42:04 crc kubenswrapper[4728]: I0204 11:42:04.086600 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cmpf4" event={"ID":"5d51181d-348f-4581-8429-b8bbb614d0e7","Type":"ContainerStarted","Data":"0c5815f143cd2d1bdb0c13d8a4ffe3072f239f2ebb584b6a40050b6be0c79f17"} Feb 04 11:42:04 crc kubenswrapper[4728]: I0204 11:42:04.086969 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cmpf4" event={"ID":"5d51181d-348f-4581-8429-b8bbb614d0e7","Type":"ContainerStarted","Data":"a6ba27130d87a071db37c7c28c6992ab3c10d5b1e579d12edf3664b63eaa0815"} Feb 04 11:42:04 crc kubenswrapper[4728]: I0204 11:42:04.086985 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cmpf4" event={"ID":"5d51181d-348f-4581-8429-b8bbb614d0e7","Type":"ContainerStarted","Data":"7d74e32ed0e32f2b84ac90c585cab30ec350d5815620d96204f00abed62fc9ea"} Feb 04 11:42:04 crc kubenswrapper[4728]: I0204 11:42:04.087206 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cmpf4" Feb 04 11:42:10 crc kubenswrapper[4728]: I0204 11:42:10.127673 4728 generic.go:334] "Generic (PLEG): container finished" podID="eb981a9b-06b1-47a4-aa97-3d46980d3769" containerID="70d8909af128a4d6108ecff1675262298432e6a6de585093baeeee767eb4c9c0" exitCode=0 Feb 04 11:42:10 crc kubenswrapper[4728]: I0204 11:42:10.127793 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wk247" event={"ID":"eb981a9b-06b1-47a4-aa97-3d46980d3769","Type":"ContainerDied","Data":"70d8909af128a4d6108ecff1675262298432e6a6de585093baeeee767eb4c9c0"} Feb 04 11:42:10 crc kubenswrapper[4728]: I0204 11:42:10.129275 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" event={"ID":"135aeb72-0473-4fa8-b594-c933ad100216","Type":"ContainerStarted","Data":"13ca22af162bd0f69c182bc703f8165dc3595e6e7c0364309f077044a02fb000"} Feb 04 11:42:10 crc kubenswrapper[4728]: I0204 11:42:10.129430 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" Feb 04 11:42:10 crc kubenswrapper[4728]: I0204 11:42:10.149487 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cmpf4" podStartSLOduration=9.149471495 podStartE2EDuration="9.149471495s" podCreationTimestamp="2026-02-04 11:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:42:04.116412119 +0000 UTC m=+873.259116504" watchObservedRunningTime="2026-02-04 11:42:10.149471495 +0000 UTC m=+879.292175870" Feb 04 11:42:10 crc kubenswrapper[4728]: I0204 11:42:10.164291 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" podStartSLOduration=1.9285226949999998 
podStartE2EDuration="9.164274653s" podCreationTimestamp="2026-02-04 11:42:01 +0000 UTC" firstStartedPulling="2026-02-04 11:42:02.033097246 +0000 UTC m=+871.175801631" lastFinishedPulling="2026-02-04 11:42:09.268849204 +0000 UTC m=+878.411553589" observedRunningTime="2026-02-04 11:42:10.159437779 +0000 UTC m=+879.302142164" watchObservedRunningTime="2026-02-04 11:42:10.164274653 +0000 UTC m=+879.306979038" Feb 04 11:42:11 crc kubenswrapper[4728]: I0204 11:42:11.137778 4728 generic.go:334] "Generic (PLEG): container finished" podID="eb981a9b-06b1-47a4-aa97-3d46980d3769" containerID="faa2ebaa071364abeda39b52db6ad4481adde46dda547381e74a9ee8452009aa" exitCode=0 Feb 04 11:42:11 crc kubenswrapper[4728]: I0204 11:42:11.137823 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wk247" event={"ID":"eb981a9b-06b1-47a4-aa97-3d46980d3769","Type":"ContainerDied","Data":"faa2ebaa071364abeda39b52db6ad4481adde46dda547381e74a9ee8452009aa"} Feb 04 11:42:12 crc kubenswrapper[4728]: I0204 11:42:12.154496 4728 generic.go:334] "Generic (PLEG): container finished" podID="eb981a9b-06b1-47a4-aa97-3d46980d3769" containerID="9e84464cfabb6e22a86358ab07b444c5823a55cddf82b06d6a5839d477761eca" exitCode=0 Feb 04 11:42:12 crc kubenswrapper[4728]: I0204 11:42:12.154697 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wk247" event={"ID":"eb981a9b-06b1-47a4-aa97-3d46980d3769","Type":"ContainerDied","Data":"9e84464cfabb6e22a86358ab07b444c5823a55cddf82b06d6a5839d477761eca"} Feb 04 11:42:13 crc kubenswrapper[4728]: I0204 11:42:13.163994 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wk247" event={"ID":"eb981a9b-06b1-47a4-aa97-3d46980d3769","Type":"ContainerStarted","Data":"a03810d1b4230c81e1e59444aec773d4740d31d7bcfcc07f6077abe5ba9190ba"} Feb 04 11:42:13 crc kubenswrapper[4728]: I0204 11:42:13.164260 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wk247" event={"ID":"eb981a9b-06b1-47a4-aa97-3d46980d3769","Type":"ContainerStarted","Data":"cd5eb60f0afe6a76ca945587978269782497437bcbb61a151f7172d9c3df9a97"} Feb 04 11:42:13 crc kubenswrapper[4728]: I0204 11:42:13.164271 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wk247" event={"ID":"eb981a9b-06b1-47a4-aa97-3d46980d3769","Type":"ContainerStarted","Data":"acef37bc1760a19640d3baedd75a25df6c4279e41e4702e58f1c358a57256561"} Feb 04 11:42:13 crc kubenswrapper[4728]: I0204 11:42:13.164280 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wk247" event={"ID":"eb981a9b-06b1-47a4-aa97-3d46980d3769","Type":"ContainerStarted","Data":"81bf08b513dee6ec2f2c42c85c624db2ffd8f102e86886d098c3c006be2adc12"} Feb 04 11:42:13 crc kubenswrapper[4728]: I0204 11:42:13.164291 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wk247" event={"ID":"eb981a9b-06b1-47a4-aa97-3d46980d3769","Type":"ContainerStarted","Data":"1a344f890a180b8f3edbd6ccbb7b322198d37f114df32799f5ce802937a1c96d"} Feb 04 11:42:13 crc kubenswrapper[4728]: I0204 11:42:13.419145 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cmpf4" Feb 04 11:42:14 crc kubenswrapper[4728]: I0204 11:42:14.173147 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wk247" event={"ID":"eb981a9b-06b1-47a4-aa97-3d46980d3769","Type":"ContainerStarted","Data":"ff651ba72c5854cb122f69362075a69322347a7c5ddceb5f975b6cf2ea20cd08"} Feb 04 11:42:14 crc 
kubenswrapper[4728]: I0204 11:42:14.173443 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:14 crc kubenswrapper[4728]: I0204 11:42:14.198005 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-wk247" podStartSLOduration=6.156084751 podStartE2EDuration="13.197984862s" podCreationTimestamp="2026-02-04 11:42:01 +0000 UTC" firstStartedPulling="2026-02-04 11:42:02.206614276 +0000 UTC m=+871.349318661" lastFinishedPulling="2026-02-04 11:42:09.248514387 +0000 UTC m=+878.391218772" observedRunningTime="2026-02-04 11:42:14.192449721 +0000 UTC m=+883.335154116" watchObservedRunningTime="2026-02-04 11:42:14.197984862 +0000 UTC m=+883.340689237" Feb 04 11:42:16 crc kubenswrapper[4728]: I0204 11:42:16.017799 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-24qnj"] Feb 04 11:42:16 crc kubenswrapper[4728]: I0204 11:42:16.018689 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-24qnj" Feb 04 11:42:16 crc kubenswrapper[4728]: I0204 11:42:16.020577 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 04 11:42:16 crc kubenswrapper[4728]: I0204 11:42:16.021171 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5phhc" Feb 04 11:42:16 crc kubenswrapper[4728]: I0204 11:42:16.021708 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 04 11:42:16 crc kubenswrapper[4728]: I0204 11:42:16.029484 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-24qnj"] Feb 04 11:42:16 crc kubenswrapper[4728]: I0204 11:42:16.084871 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmr9m\" (UniqueName: \"kubernetes.io/projected/b09b1db0-c635-46ed-871a-a56969287944-kube-api-access-gmr9m\") pod \"openstack-operator-index-24qnj\" (UID: \"b09b1db0-c635-46ed-871a-a56969287944\") " pod="openstack-operators/openstack-operator-index-24qnj" Feb 04 11:42:16 crc kubenswrapper[4728]: I0204 11:42:16.186021 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmr9m\" (UniqueName: \"kubernetes.io/projected/b09b1db0-c635-46ed-871a-a56969287944-kube-api-access-gmr9m\") pod \"openstack-operator-index-24qnj\" (UID: \"b09b1db0-c635-46ed-871a-a56969287944\") " pod="openstack-operators/openstack-operator-index-24qnj" Feb 04 11:42:16 crc kubenswrapper[4728]: I0204 11:42:16.204206 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmr9m\" (UniqueName: \"kubernetes.io/projected/b09b1db0-c635-46ed-871a-a56969287944-kube-api-access-gmr9m\") pod \"openstack-operator-index-24qnj\" (UID: \"b09b1db0-c635-46ed-871a-a56969287944\") " pod="openstack-operators/openstack-operator-index-24qnj" Feb 04 11:42:16 crc kubenswrapper[4728]: I0204 11:42:16.338189 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-24qnj" Feb 04 11:42:16 crc kubenswrapper[4728]: I0204 11:42:16.726078 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-24qnj"] Feb 04 11:42:16 crc kubenswrapper[4728]: W0204 11:42:16.732509 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb09b1db0_c635_46ed_871a_a56969287944.slice/crio-98e110b4a5bbdcb53307c21598c243940ec8104e29e52a65cc622d6aec3f2294 WatchSource:0}: Error finding container 98e110b4a5bbdcb53307c21598c243940ec8104e29e52a65cc622d6aec3f2294: Status 404 returned error can't find the container with id 98e110b4a5bbdcb53307c21598c243940ec8104e29e52a65cc622d6aec3f2294 Feb 04 11:42:17 crc kubenswrapper[4728]: I0204 11:42:17.121414 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:17 crc kubenswrapper[4728]: I0204 11:42:17.171613 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:17 crc kubenswrapper[4728]: I0204 11:42:17.210812 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-24qnj" event={"ID":"b09b1db0-c635-46ed-871a-a56969287944","Type":"ContainerStarted","Data":"98e110b4a5bbdcb53307c21598c243940ec8104e29e52a65cc622d6aec3f2294"} Feb 04 11:42:19 crc kubenswrapper[4728]: I0204 11:42:19.398114 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-24qnj"] Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.002514 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-24b52"] Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.003180 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-24b52" Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.016831 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-24b52"] Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.044496 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkmvq\" (UniqueName: \"kubernetes.io/projected/4c417533-a48b-4aaf-a428-6844c84b9845-kube-api-access-gkmvq\") pod \"openstack-operator-index-24b52\" (UID: \"4c417533-a48b-4aaf-a428-6844c84b9845\") " pod="openstack-operators/openstack-operator-index-24b52" Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.145354 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkmvq\" (UniqueName: \"kubernetes.io/projected/4c417533-a48b-4aaf-a428-6844c84b9845-kube-api-access-gkmvq\") pod \"openstack-operator-index-24b52\" (UID: \"4c417533-a48b-4aaf-a428-6844c84b9845\") " pod="openstack-operators/openstack-operator-index-24b52" Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.169042 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkmvq\" (UniqueName: \"kubernetes.io/projected/4c417533-a48b-4aaf-a428-6844c84b9845-kube-api-access-gkmvq\") pod \"openstack-operator-index-24b52\" (UID: \"4c417533-a48b-4aaf-a428-6844c84b9845\") " pod="openstack-operators/openstack-operator-index-24b52" Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.231178 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-24qnj" event={"ID":"b09b1db0-c635-46ed-871a-a56969287944","Type":"ContainerStarted","Data":"ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288"} Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.231362 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-24qnj" podUID="b09b1db0-c635-46ed-871a-a56969287944" containerName="registry-server" containerID="cri-o://ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288" gracePeriod=2 Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.248020 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-24qnj" podStartSLOduration=1.439875464 podStartE2EDuration="4.247998129s" podCreationTimestamp="2026-02-04 11:42:16 +0000 UTC" firstStartedPulling="2026-02-04 11:42:16.734118218 +0000 UTC m=+885.876822603" lastFinishedPulling="2026-02-04 11:42:19.542240883 +0000 UTC m=+888.684945268" observedRunningTime="2026-02-04 11:42:20.244764107 +0000 UTC m=+889.387468492" watchObservedRunningTime="2026-02-04 11:42:20.247998129 +0000 UTC m=+889.390702514" Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.316809 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-24b52" Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.568016 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-24qnj" Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.753247 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmr9m\" (UniqueName: \"kubernetes.io/projected/b09b1db0-c635-46ed-871a-a56969287944-kube-api-access-gmr9m\") pod \"b09b1db0-c635-46ed-871a-a56969287944\" (UID: \"b09b1db0-c635-46ed-871a-a56969287944\") " Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.761519 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b09b1db0-c635-46ed-871a-a56969287944-kube-api-access-gmr9m" (OuterVolumeSpecName: "kube-api-access-gmr9m") pod "b09b1db0-c635-46ed-871a-a56969287944" (UID: "b09b1db0-c635-46ed-871a-a56969287944"). InnerVolumeSpecName "kube-api-access-gmr9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.763411 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-24b52"] Feb 04 11:42:20 crc kubenswrapper[4728]: W0204 11:42:20.775192 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c417533_a48b_4aaf_a428_6844c84b9845.slice/crio-77fe348e0a8df7cee0a40e16fbc1a94fc813a9e4e0d028e52aa12d439a354b4f WatchSource:0}: Error finding container 77fe348e0a8df7cee0a40e16fbc1a94fc813a9e4e0d028e52aa12d439a354b4f: Status 404 returned error can't find the container with id 77fe348e0a8df7cee0a40e16fbc1a94fc813a9e4e0d028e52aa12d439a354b4f Feb 04 11:42:20 crc kubenswrapper[4728]: I0204 11:42:20.856579 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmr9m\" (UniqueName: \"kubernetes.io/projected/b09b1db0-c635-46ed-871a-a56969287944-kube-api-access-gmr9m\") on node \"crc\" DevicePath \"\"" Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.238926 4728 generic.go:334] "Generic (PLEG): container finished" podID="b09b1db0-c635-46ed-871a-a56969287944" containerID="ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288" exitCode=0 Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.238972 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-24qnj" Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.238992 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-24qnj" event={"ID":"b09b1db0-c635-46ed-871a-a56969287944","Type":"ContainerDied","Data":"ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288"} Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.239506 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-24qnj" event={"ID":"b09b1db0-c635-46ed-871a-a56969287944","Type":"ContainerDied","Data":"98e110b4a5bbdcb53307c21598c243940ec8104e29e52a65cc622d6aec3f2294"} Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.239526 4728 scope.go:117] "RemoveContainer" containerID="ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288" Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.241526 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-24b52" event={"ID":"4c417533-a48b-4aaf-a428-6844c84b9845","Type":"ContainerStarted","Data":"c140df3ecb8d8f97d7b22a6d603d724a18580f24f5b77f57d529d729d40bc560"} Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.241554 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-24b52" event={"ID":"4c417533-a48b-4aaf-a428-6844c84b9845","Type":"ContainerStarted","Data":"77fe348e0a8df7cee0a40e16fbc1a94fc813a9e4e0d028e52aa12d439a354b4f"} Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.256700 4728 scope.go:117] "RemoveContainer" containerID="ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288" Feb 04 11:42:21 crc kubenswrapper[4728]: E0204 11:42:21.257200 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288\": container with ID starting with ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288 not found: ID does not exist" containerID="ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288" Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.257256 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288"} err="failed to get container status \"ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288\": rpc error: code = NotFound desc = could not find container \"ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288\": container with ID starting with ad9b5e1f473a73af568e479bacf91d87f44a0cb96e6d103f903498440a7cd288 not found: ID does not exist" Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.264038 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-24b52" podStartSLOduration=2.20563382 podStartE2EDuration="2.264015767s" podCreationTimestamp="2026-02-04 11:42:19 +0000 UTC" firstStartedPulling="2026-02-04 11:42:20.776819768 +0000 UTC m=+889.919524153" lastFinishedPulling="2026-02-04 11:42:20.835201715 +0000 UTC m=+889.977906100" observedRunningTime="2026-02-04 11:42:21.257052279 +0000 UTC m=+890.399756724" watchObservedRunningTime="2026-02-04 11:42:21.264015767 +0000 UTC m=+890.406720152" Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.272210 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-24qnj"] Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.275942 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-24qnj"] Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.564669 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b09b1db0-c635-46ed-871a-a56969287944" path="/var/lib/kubelet/pods/b09b1db0-c635-46ed-871a-a56969287944/volumes" Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.565364 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-97dfd4f9f-jv5kf" Feb 04 11:42:21 crc kubenswrapper[4728]: I0204 11:42:21.624097 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-9c48fdfd-5w5qt" Feb 04 11:42:22 crc kubenswrapper[4728]: I0204 11:42:22.124598 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-wk247" Feb 04 11:42:30 crc kubenswrapper[4728]: I0204 11:42:30.317352 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-24b52" Feb 04 11:42:30 crc kubenswrapper[4728]: I0204 11:42:30.318018 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-24b52" Feb 04 11:42:30 crc kubenswrapper[4728]: I0204 11:42:30.347726 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-24b52" Feb 04 11:42:30 crc kubenswrapper[4728]: I0204 11:42:30.476558 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-24b52" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.499345 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw"] Feb 04 11:42:31 crc kubenswrapper[4728]: E0204 11:42:31.499903 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09b1db0-c635-46ed-871a-a56969287944" containerName="registry-server" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.499917 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09b1db0-c635-46ed-871a-a56969287944" containerName="registry-server" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.500032 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b09b1db0-c635-46ed-871a-a56969287944" containerName="registry-server" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.500846 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.504800 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-lmfj6" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.539951 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw"] Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.600163 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-util\") pod \"a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.600244 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-bundle\") pod \"a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.600461 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdq7s\" (UniqueName: \"kubernetes.io/projected/a5f60744-0d28-4cbf-978a-f3cc15df91cf-kube-api-access-gdq7s\") pod \"a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.701451 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdq7s\" (UniqueName: \"kubernetes.io/projected/a5f60744-0d28-4cbf-978a-f3cc15df91cf-kube-api-access-gdq7s\") pod \"a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.701615 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-util\") pod \"a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.701653 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-bundle\") pod \"a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.702166 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-util\") pod \"a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.702340 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-bundle\") pod \"a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.720409 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdq7s\" (UniqueName: \"kubernetes.io/projected/a5f60744-0d28-4cbf-978a-f3cc15df91cf-kube-api-access-gdq7s\") pod \"a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:31 crc kubenswrapper[4728]: I0204 11:42:31.820975 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:32 crc kubenswrapper[4728]: I0204 11:42:32.289011 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw"] Feb 04 11:42:32 crc kubenswrapper[4728]: W0204 11:42:32.298893 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5f60744_0d28_4cbf_978a_f3cc15df91cf.slice/crio-92e5991e32c8afd8454898e8148cb015864c83351dbe958e1af63e75b31d8a28 WatchSource:0}: Error finding container 92e5991e32c8afd8454898e8148cb015864c83351dbe958e1af63e75b31d8a28: Status 404 returned error can't find the container with id 92e5991e32c8afd8454898e8148cb015864c83351dbe958e1af63e75b31d8a28 Feb 04 11:42:32 crc kubenswrapper[4728]: I0204 11:42:32.467635 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" event={"ID":"a5f60744-0d28-4cbf-978a-f3cc15df91cf","Type":"ContainerStarted","Data":"a3aabc093b6cd7bc717ce76ee54bbaef7c9150df01e204121b85388130b464ff"} Feb 04 11:42:32 crc kubenswrapper[4728]: I0204 11:42:32.467844 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" event={"ID":"a5f60744-0d28-4cbf-978a-f3cc15df91cf","Type":"ContainerStarted","Data":"92e5991e32c8afd8454898e8148cb015864c83351dbe958e1af63e75b31d8a28"} Feb 04 11:42:33 crc kubenswrapper[4728]: I0204 11:42:33.476323 4728 generic.go:334] "Generic (PLEG): container finished" podID="a5f60744-0d28-4cbf-978a-f3cc15df91cf" containerID="a3aabc093b6cd7bc717ce76ee54bbaef7c9150df01e204121b85388130b464ff" exitCode=0 Feb 04 11:42:33 crc kubenswrapper[4728]: I0204 11:42:33.476384 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" event={"ID":"a5f60744-0d28-4cbf-978a-f3cc15df91cf","Type":"ContainerDied","Data":"a3aabc093b6cd7bc717ce76ee54bbaef7c9150df01e204121b85388130b464ff"} Feb 04 11:42:34 crc kubenswrapper[4728]: 
I0204 11:42:34.483296 4728 generic.go:334] "Generic (PLEG): container finished" podID="a5f60744-0d28-4cbf-978a-f3cc15df91cf" containerID="bdcd6ae3bbf4a57e49257ab629be6de53a42c8673656ae701f1a53884f429dbd" exitCode=0 Feb 04 11:42:34 crc kubenswrapper[4728]: I0204 11:42:34.483396 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" event={"ID":"a5f60744-0d28-4cbf-978a-f3cc15df91cf","Type":"ContainerDied","Data":"bdcd6ae3bbf4a57e49257ab629be6de53a42c8673656ae701f1a53884f429dbd"} Feb 04 11:42:35 crc kubenswrapper[4728]: I0204 11:42:35.493781 4728 generic.go:334] "Generic (PLEG): container finished" podID="a5f60744-0d28-4cbf-978a-f3cc15df91cf" containerID="9567b5cc9ae4b97c6094409e69a065efea59a7d83bc54b57f526b6f6d0fc8a5a" exitCode=0 Feb 04 11:42:35 crc kubenswrapper[4728]: I0204 11:42:35.493859 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" event={"ID":"a5f60744-0d28-4cbf-978a-f3cc15df91cf","Type":"ContainerDied","Data":"9567b5cc9ae4b97c6094409e69a065efea59a7d83bc54b57f526b6f6d0fc8a5a"} Feb 04 11:42:36 crc kubenswrapper[4728]: I0204 11:42:36.797300 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:36 crc kubenswrapper[4728]: I0204 11:42:36.976611 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdq7s\" (UniqueName: \"kubernetes.io/projected/a5f60744-0d28-4cbf-978a-f3cc15df91cf-kube-api-access-gdq7s\") pod \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " Feb 04 11:42:36 crc kubenswrapper[4728]: I0204 11:42:36.976938 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-bundle\") pod \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " Feb 04 11:42:36 crc kubenswrapper[4728]: I0204 11:42:36.977005 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-util\") pod \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\" (UID: \"a5f60744-0d28-4cbf-978a-f3cc15df91cf\") " Feb 04 11:42:36 crc kubenswrapper[4728]: I0204 11:42:36.978098 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-bundle" (OuterVolumeSpecName: "bundle") pod "a5f60744-0d28-4cbf-978a-f3cc15df91cf" (UID: "a5f60744-0d28-4cbf-978a-f3cc15df91cf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:42:36 crc kubenswrapper[4728]: I0204 11:42:36.986310 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f60744-0d28-4cbf-978a-f3cc15df91cf-kube-api-access-gdq7s" (OuterVolumeSpecName: "kube-api-access-gdq7s") pod "a5f60744-0d28-4cbf-978a-f3cc15df91cf" (UID: "a5f60744-0d28-4cbf-978a-f3cc15df91cf"). InnerVolumeSpecName "kube-api-access-gdq7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:42:36 crc kubenswrapper[4728]: I0204 11:42:36.991981 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-util" (OuterVolumeSpecName: "util") pod "a5f60744-0d28-4cbf-978a-f3cc15df91cf" (UID: "a5f60744-0d28-4cbf-978a-f3cc15df91cf"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:42:37 crc kubenswrapper[4728]: I0204 11:42:37.078934 4728 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:42:37 crc kubenswrapper[4728]: I0204 11:42:37.079009 4728 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5f60744-0d28-4cbf-978a-f3cc15df91cf-util\") on node \"crc\" DevicePath \"\"" Feb 04 11:42:37 crc kubenswrapper[4728]: I0204 11:42:37.079023 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdq7s\" (UniqueName: \"kubernetes.io/projected/a5f60744-0d28-4cbf-978a-f3cc15df91cf-kube-api-access-gdq7s\") on node \"crc\" DevicePath \"\"" Feb 04 11:42:37 crc kubenswrapper[4728]: I0204 11:42:37.521427 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" event={"ID":"a5f60744-0d28-4cbf-978a-f3cc15df91cf","Type":"ContainerDied","Data":"92e5991e32c8afd8454898e8148cb015864c83351dbe958e1af63e75b31d8a28"} Feb 04 11:42:37 crc kubenswrapper[4728]: I0204 11:42:37.521471 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92e5991e32c8afd8454898e8148cb015864c83351dbe958e1af63e75b31d8a28" Feb 04 11:42:37 crc kubenswrapper[4728]: I0204 11:42:37.521510 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw" Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.470735 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg"] Feb 04 11:42:44 crc kubenswrapper[4728]: E0204 11:42:44.471541 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f60744-0d28-4cbf-978a-f3cc15df91cf" containerName="pull" Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.471556 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f60744-0d28-4cbf-978a-f3cc15df91cf" containerName="pull" Feb 04 11:42:44 crc kubenswrapper[4728]: E0204 11:42:44.471571 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f60744-0d28-4cbf-978a-f3cc15df91cf" containerName="util" Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.471578 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f60744-0d28-4cbf-978a-f3cc15df91cf" containerName="util" Feb 04 11:42:44 crc kubenswrapper[4728]: E0204 11:42:44.471598 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f60744-0d28-4cbf-978a-f3cc15df91cf" containerName="extract" Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.471605 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f60744-0d28-4cbf-978a-f3cc15df91cf" containerName="extract" Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.471732 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f60744-0d28-4cbf-978a-f3cc15df91cf" containerName="extract" Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.472239 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg" Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.474968 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-k4vsb" Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.491232 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg"] Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.579598 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj8rh\" (UniqueName: \"kubernetes.io/projected/97827c0e-98ed-4486-9ee8-918dc6df645b-kube-api-access-cj8rh\") pod \"openstack-operator-controller-init-5957c4869f-7wljg\" (UID: \"97827c0e-98ed-4486-9ee8-918dc6df645b\") " pod="openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg" Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.681421 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj8rh\" (UniqueName: \"kubernetes.io/projected/97827c0e-98ed-4486-9ee8-918dc6df645b-kube-api-access-cj8rh\") pod \"openstack-operator-controller-init-5957c4869f-7wljg\" (UID: \"97827c0e-98ed-4486-9ee8-918dc6df645b\") " pod="openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg" Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.710021 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj8rh\" (UniqueName: \"kubernetes.io/projected/97827c0e-98ed-4486-9ee8-918dc6df645b-kube-api-access-cj8rh\") pod \"openstack-operator-controller-init-5957c4869f-7wljg\" (UID: 
\"97827c0e-98ed-4486-9ee8-918dc6df645b\") " pod="openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg" Feb 04 11:42:44 crc kubenswrapper[4728]: I0204 11:42:44.791791 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg" Feb 04 11:42:45 crc kubenswrapper[4728]: I0204 11:42:45.005927 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg"] Feb 04 11:42:45 crc kubenswrapper[4728]: W0204 11:42:45.015347 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97827c0e_98ed_4486_9ee8_918dc6df645b.slice/crio-13d3d86bfd356dad5101fd668bf98e95dd3860e19ae0d6aed32f45faa65bc98f WatchSource:0}: Error finding container 13d3d86bfd356dad5101fd668bf98e95dd3860e19ae0d6aed32f45faa65bc98f: Status 404 returned error can't find the container with id 13d3d86bfd356dad5101fd668bf98e95dd3860e19ae0d6aed32f45faa65bc98f Feb 04 11:42:45 crc kubenswrapper[4728]: I0204 11:42:45.574495 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg" event={"ID":"97827c0e-98ed-4486-9ee8-918dc6df645b","Type":"ContainerStarted","Data":"13d3d86bfd356dad5101fd668bf98e95dd3860e19ae0d6aed32f45faa65bc98f"} Feb 04 11:42:49 crc kubenswrapper[4728]: I0204 11:42:49.603323 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg" event={"ID":"97827c0e-98ed-4486-9ee8-918dc6df645b","Type":"ContainerStarted","Data":"4bd85805b14a56168988d389b3cf6e4dde1738b617b625184de815e8d2dd0fdd"} Feb 04 11:42:49 crc kubenswrapper[4728]: I0204 11:42:49.603852 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg" Feb 04 11:42:49 crc kubenswrapper[4728]: I0204 11:42:49.636649 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg" podStartSLOduration=2.001704957 podStartE2EDuration="5.63663112s" podCreationTimestamp="2026-02-04 11:42:44 +0000 UTC" firstStartedPulling="2026-02-04 11:42:45.017612152 +0000 UTC m=+914.160316537" lastFinishedPulling="2026-02-04 11:42:48.652538315 +0000 UTC m=+917.795242700" observedRunningTime="2026-02-04 11:42:49.626868452 +0000 UTC m=+918.769572857" watchObservedRunningTime="2026-02-04 11:42:49.63663112 +0000 UTC m=+918.779335525" Feb 04 11:42:54 crc kubenswrapper[4728]: I0204 11:42:54.794425 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5957c4869f-7wljg" Feb 04 11:43:05 crc kubenswrapper[4728]: I0204 11:43:05.448194 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:43:05 crc kubenswrapper[4728]: I0204 11:43:05.448634 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.237005 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.238909 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.244901 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-j288x" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.258086 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.277660 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.279111 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.295303 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.296858 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.310583 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ffptc" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.311658 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-wzrzc" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.314818 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.327662 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.340535 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.341358 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.344256 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2klc7" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.348724 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cvfp\" (UniqueName: \"kubernetes.io/projected/e8b0005b-18c6-4701-b22f-41d0127becf7-kube-api-access-9cvfp\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-b5scl\" (UID: \"e8b0005b-18c6-4701-b22f-41d0127becf7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.348817 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdsqd\" (UniqueName: \"kubernetes.io/projected/0760f0c3-0076-4be3-8b2e-2dc9fcf0d929-kube-api-access-wdsqd\") pod \"cinder-operator-controller-manager-8d874c8fc-57ztz\" (UID: \"0760f0c3-0076-4be3-8b2e-2dc9fcf0d929\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.366519 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.368997 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.375900 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2nvh5" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.376074 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.376812 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.387372 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-78rpz"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.392926 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.397443 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hqj55" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.397662 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zwn6n" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.397769 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.401447 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.411015 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.420846 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.424571 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-78rpz"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.430901 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.433980 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.436386 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-h2wmn" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.447454 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.450379 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncwgq\" (UniqueName: \"kubernetes.io/projected/b6c7167f-86c2-4e7e-8699-24f3932124ab-kube-api-access-ncwgq\") pod \"infra-operator-controller-manager-79955696d6-78rpz\" (UID: \"b6c7167f-86c2-4e7e-8699-24f3932124ab\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.450458 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdsqd\" (UniqueName: \"kubernetes.io/projected/0760f0c3-0076-4be3-8b2e-2dc9fcf0d929-kube-api-access-wdsqd\") pod \"cinder-operator-controller-manager-8d874c8fc-57ztz\" (UID: \"0760f0c3-0076-4be3-8b2e-2dc9fcf0d929\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.450501 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert\") pod \"infra-operator-controller-manager-79955696d6-78rpz\" (UID: \"b6c7167f-86c2-4e7e-8699-24f3932124ab\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.450586 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s252\" (UniqueName: \"kubernetes.io/projected/92f15a6a-b8bc-470b-9558-72b958a8c32b-kube-api-access-2s252\") pod \"horizon-operator-controller-manager-5fb775575f-848zn\" (UID: \"92f15a6a-b8bc-470b-9558-72b958a8c32b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.450633 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9z8m\" (UniqueName: \"kubernetes.io/projected/f488ccbd-9346-4fd7-bfce-f7e5375f9100-kube-api-access-t9z8m\") pod \"designate-operator-controller-manager-6d9697b7f4-rvp9b\" (UID: \"f488ccbd-9346-4fd7-bfce-f7e5375f9100\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.450673 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9s9\" (UniqueName: \"kubernetes.io/projected/3829b622-23b9-4160-8875-b2c310b3b531-kube-api-access-zd9s9\") pod \"heat-operator-controller-manager-69d6db494d-mfshf\" (UID: \"3829b622-23b9-4160-8875-b2c310b3b531\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.450692 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g77c9\" (UniqueName: 
\"kubernetes.io/projected/3a514d11-28a0-4a17-9714-7a8d60216402-kube-api-access-g77c9\") pod \"glance-operator-controller-manager-8886f4c47-cbzrl\" (UID: \"3a514d11-28a0-4a17-9714-7a8d60216402\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.450718 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cvfp\" (UniqueName: \"kubernetes.io/projected/e8b0005b-18c6-4701-b22f-41d0127becf7-kube-api-access-9cvfp\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-b5scl\" (UID: \"e8b0005b-18c6-4701-b22f-41d0127becf7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.466050 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.466975 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.471516 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2sms7" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.474599 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.492518 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.493605 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.495480 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-lwd2s" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.496535 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdsqd\" (UniqueName: \"kubernetes.io/projected/0760f0c3-0076-4be3-8b2e-2dc9fcf0d929-kube-api-access-wdsqd\") pod \"cinder-operator-controller-manager-8d874c8fc-57ztz\" (UID: \"0760f0c3-0076-4be3-8b2e-2dc9fcf0d929\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.498025 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cvfp\" (UniqueName: \"kubernetes.io/projected/e8b0005b-18c6-4701-b22f-41d0127becf7-kube-api-access-9cvfp\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-b5scl\" (UID: \"e8b0005b-18c6-4701-b22f-41d0127becf7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.511647 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.518464 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.519251 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.523157 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-w2jnq" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.535050 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.536814 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.541117 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.545284 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.547465 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-7kdgs" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.552229 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpr26\" (UniqueName: \"kubernetes.io/projected/698c89f3-b4ef-443f-bce4-f1fe2fdbc1c7-kube-api-access-lpr26\") pod \"manila-operator-controller-manager-7dd968899f-44sgj\" (UID: \"698c89f3-b4ef-443f-bce4-f1fe2fdbc1c7\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.552292 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s252\" (UniqueName: \"kubernetes.io/projected/92f15a6a-b8bc-470b-9558-72b958a8c32b-kube-api-access-2s252\") pod \"horizon-operator-controller-manager-5fb775575f-848zn\" (UID: \"92f15a6a-b8bc-470b-9558-72b958a8c32b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.552325 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9z8m\" (UniqueName: \"kubernetes.io/projected/f488ccbd-9346-4fd7-bfce-f7e5375f9100-kube-api-access-t9z8m\") pod \"designate-operator-controller-manager-6d9697b7f4-rvp9b\" (UID: \"f488ccbd-9346-4fd7-bfce-f7e5375f9100\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.552358 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd9s9\" (UniqueName: \"kubernetes.io/projected/3829b622-23b9-4160-8875-b2c310b3b531-kube-api-access-zd9s9\") pod \"heat-operator-controller-manager-69d6db494d-mfshf\" (UID: \"3829b622-23b9-4160-8875-b2c310b3b531\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.552373 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g77c9\" (UniqueName: \"kubernetes.io/projected/3a514d11-28a0-4a17-9714-7a8d60216402-kube-api-access-g77c9\") pod \"glance-operator-controller-manager-8886f4c47-cbzrl\" (UID: \"3a514d11-28a0-4a17-9714-7a8d60216402\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.552402 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwgq\" (UniqueName: \"kubernetes.io/projected/b6c7167f-86c2-4e7e-8699-24f3932124ab-kube-api-access-ncwgq\") pod \"infra-operator-controller-manager-79955696d6-78rpz\" (UID: \"b6c7167f-86c2-4e7e-8699-24f3932124ab\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.552433 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert\") pod \"infra-operator-controller-manager-79955696d6-78rpz\" (UID: \"b6c7167f-86c2-4e7e-8699-24f3932124ab\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.552455 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmhbs\" (UniqueName: \"kubernetes.io/projected/29a70c36-efb8-40bc-89ec-68d20f9cf253-kube-api-access-qmhbs\") pod \"keystone-operator-controller-manager-84f48565d4-qt7k8\" (UID: \"29a70c36-efb8-40bc-89ec-68d20f9cf253\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.552477 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl2zt\" (UniqueName: \"kubernetes.io/projected/624d3845-dd5b-46eb-80cc-5a587a812d78-kube-api-access-pl2zt\") pod \"ironic-operator-controller-manager-5f4b8bd54d-phlht\" (UID: \"624d3845-dd5b-46eb-80cc-5a587a812d78\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht" Feb 04 11:43:18 crc kubenswrapper[4728]: E0204 11:43:18.553322 4728 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 04 11:43:18 crc kubenswrapper[4728]: E0204 11:43:18.553372 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert podName:b6c7167f-86c2-4e7e-8699-24f3932124ab nodeName:}" failed. No retries permitted until 2026-02-04 11:43:19.053357268 +0000 UTC m=+948.196061653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert") pod "infra-operator-controller-manager-79955696d6-78rpz" (UID: "b6c7167f-86c2-4e7e-8699-24f3932124ab") : secret "infra-operator-webhook-server-cert" not found Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.555805 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.556842 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.568101 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-44dts" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.568732 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.569648 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.571295 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bprzp" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.571602 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.576705 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd9s9\" (UniqueName: \"kubernetes.io/projected/3829b622-23b9-4160-8875-b2c310b3b531-kube-api-access-zd9s9\") pod \"heat-operator-controller-manager-69d6db494d-mfshf\" (UID: \"3829b622-23b9-4160-8875-b2c310b3b531\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.577257 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s252\" (UniqueName: \"kubernetes.io/projected/92f15a6a-b8bc-470b-9558-72b958a8c32b-kube-api-access-2s252\") pod \"horizon-operator-controller-manager-5fb775575f-848zn\" (UID: \"92f15a6a-b8bc-470b-9558-72b958a8c32b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.579980 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9z8m\" (UniqueName: \"kubernetes.io/projected/f488ccbd-9346-4fd7-bfce-f7e5375f9100-kube-api-access-t9z8m\") pod \"designate-operator-controller-manager-6d9697b7f4-rvp9b\" (UID: \"f488ccbd-9346-4fd7-bfce-f7e5375f9100\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.589779 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g77c9\" (UniqueName: \"kubernetes.io/projected/3a514d11-28a0-4a17-9714-7a8d60216402-kube-api-access-g77c9\") pod \"glance-operator-controller-manager-8886f4c47-cbzrl\" (UID: \"3a514d11-28a0-4a17-9714-7a8d60216402\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.590737 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.594087 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.596351 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncwgq\" (UniqueName: \"kubernetes.io/projected/b6c7167f-86c2-4e7e-8699-24f3932124ab-kube-api-access-ncwgq\") pod \"infra-operator-controller-manager-79955696d6-78rpz\" (UID: \"b6c7167f-86c2-4e7e-8699-24f3932124ab\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.618511 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.619708 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.625447 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-sc2j8" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.626667 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.627608 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.629023 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-d9nht" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.631792 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.636538 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.637410 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.637954 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.643253 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.643399 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-shdmb" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.654181 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmhbs\" (UniqueName: \"kubernetes.io/projected/29a70c36-efb8-40bc-89ec-68d20f9cf253-kube-api-access-qmhbs\") pod \"keystone-operator-controller-manager-84f48565d4-qt7k8\" (UID: \"29a70c36-efb8-40bc-89ec-68d20f9cf253\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.654239 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpsrf\" (UniqueName: \"kubernetes.io/projected/6d88ab1e-b850-444e-90b2-05b6e311178e-kube-api-access-vpsrf\") pod \"nova-operator-controller-manager-55bff696bd-f7fb7\" (UID: \"6d88ab1e-b850-444e-90b2-05b6e311178e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.654291 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl2zt\" (UniqueName: \"kubernetes.io/projected/624d3845-dd5b-46eb-80cc-5a587a812d78-kube-api-access-pl2zt\") pod \"ironic-operator-controller-manager-5f4b8bd54d-phlht\" (UID: \"624d3845-dd5b-46eb-80cc-5a587a812d78\") " 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.654322 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvgt\" (UniqueName: \"kubernetes.io/projected/c833a690-9e25-4bbe-9d81-5d9cddbc7279-kube-api-access-9nvgt\") pod \"octavia-operator-controller-manager-6687f8d877-cghq5\" (UID: \"c833a690-9e25-4bbe-9d81-5d9cddbc7279\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.654706 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpr26\" (UniqueName: \"kubernetes.io/projected/698c89f3-b4ef-443f-bce4-f1fe2fdbc1c7-kube-api-access-lpr26\") pod \"manila-operator-controller-manager-7dd968899f-44sgj\" (UID: \"698c89f3-b4ef-443f-bce4-f1fe2fdbc1c7\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.654815 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdt8f\" (UniqueName: \"kubernetes.io/projected/cb387892-df64-4339-abd3-925fce438123-kube-api-access-zdt8f\") pod \"neutron-operator-controller-manager-585dbc889-r7hvc\" (UID: \"cb387892-df64-4339-abd3-925fce438123\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.654861 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7x79\" (UniqueName: \"kubernetes.io/projected/4e4f1e2f-ac6a-4dce-a074-2637e53f35a7-kube-api-access-w7x79\") pod \"mariadb-operator-controller-manager-67bf948998-hq55j\" (UID: \"4e4f1e2f-ac6a-4dce-a074-2637e53f35a7\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.656927 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.667947 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.670800 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.681351 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmhbs\" (UniqueName: \"kubernetes.io/projected/29a70c36-efb8-40bc-89ec-68d20f9cf253-kube-api-access-qmhbs\") pod \"keystone-operator-controller-manager-84f48565d4-qt7k8\" (UID: \"29a70c36-efb8-40bc-89ec-68d20f9cf253\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.681917 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl2zt\" (UniqueName: \"kubernetes.io/projected/624d3845-dd5b-46eb-80cc-5a587a812d78-kube-api-access-pl2zt\") pod \"ironic-operator-controller-manager-5f4b8bd54d-phlht\" (UID: \"624d3845-dd5b-46eb-80cc-5a587a812d78\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.698271 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.706651 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpr26\" (UniqueName: \"kubernetes.io/projected/698c89f3-b4ef-443f-bce4-f1fe2fdbc1c7-kube-api-access-lpr26\") pod \"manila-operator-controller-manager-7dd968899f-44sgj\" (UID: \"698c89f3-b4ef-443f-bce4-f1fe2fdbc1c7\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.714668 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.715708 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.719741 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-t7zhp" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.721176 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.742994 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.755270 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.756085 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdt8f\" (UniqueName: \"kubernetes.io/projected/cb387892-df64-4339-abd3-925fce438123-kube-api-access-zdt8f\") pod \"neutron-operator-controller-manager-585dbc889-r7hvc\" (UID: \"cb387892-df64-4339-abd3-925fce438123\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.756131 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn95r\" (UniqueName: \"kubernetes.io/projected/dc74eb23-85aa-4df4-8273-0af9a0a37dda-kube-api-access-dn95r\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.756165 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7x79\" (UniqueName: \"kubernetes.io/projected/4e4f1e2f-ac6a-4dce-a074-2637e53f35a7-kube-api-access-w7x79\") pod \"mariadb-operator-controller-manager-67bf948998-hq55j\" (UID: \"4e4f1e2f-ac6a-4dce-a074-2637e53f35a7\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.756201 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.756249 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpsrf\" (UniqueName: \"kubernetes.io/projected/6d88ab1e-b850-444e-90b2-05b6e311178e-kube-api-access-vpsrf\") pod \"nova-operator-controller-manager-55bff696bd-f7fb7\" (UID: \"6d88ab1e-b850-444e-90b2-05b6e311178e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.756275 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvgt\" (UniqueName: \"kubernetes.io/projected/c833a690-9e25-4bbe-9d81-5d9cddbc7279-kube-api-access-9nvgt\") pod \"octavia-operator-controller-manager-6687f8d877-cghq5\" (UID: \"c833a690-9e25-4bbe-9d81-5d9cddbc7279\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.756301 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6gz6\" (UniqueName: \"kubernetes.io/projected/86e535af-d713-4a58-80c0-0ce6a464f666-kube-api-access-m6gz6\") pod \"placement-operator-controller-manager-5b964cf4cd-bmrq5\" (UID: \"86e535af-d713-4a58-80c0-0ce6a464f666\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.756355 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jcstc\" (UniqueName: \"kubernetes.io/projected/0f328a26-b914-49b2-9124-b12b968232dd-kube-api-access-jcstc\") pod \"ovn-operator-controller-manager-788c46999f-pmpvx\" (UID: \"0f328a26-b914-49b2-9124-b12b968232dd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.777626 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7x79\" (UniqueName: \"kubernetes.io/projected/4e4f1e2f-ac6a-4dce-a074-2637e53f35a7-kube-api-access-w7x79\") pod \"mariadb-operator-controller-manager-67bf948998-hq55j\" (UID: \"4e4f1e2f-ac6a-4dce-a074-2637e53f35a7\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.777702 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpsrf\" (UniqueName: \"kubernetes.io/projected/6d88ab1e-b850-444e-90b2-05b6e311178e-kube-api-access-vpsrf\") pod \"nova-operator-controller-manager-55bff696bd-f7fb7\" (UID: \"6d88ab1e-b850-444e-90b2-05b6e311178e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.777815 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdt8f\" (UniqueName: \"kubernetes.io/projected/cb387892-df64-4339-abd3-925fce438123-kube-api-access-zdt8f\") pod \"neutron-operator-controller-manager-585dbc889-r7hvc\" (UID: \"cb387892-df64-4339-abd3-925fce438123\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.785327 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvgt\" (UniqueName: \"kubernetes.io/projected/c833a690-9e25-4bbe-9d81-5d9cddbc7279-kube-api-access-9nvgt\") pod \"octavia-operator-controller-manager-6687f8d877-cghq5\" (UID: \"c833a690-9e25-4bbe-9d81-5d9cddbc7279\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.794564 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.795704 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.800118 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ccs2n" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.807205 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.831142 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.855540 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.857803 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcstc\" (UniqueName: \"kubernetes.io/projected/0f328a26-b914-49b2-9124-b12b968232dd-kube-api-access-jcstc\") pod \"ovn-operator-controller-manager-788c46999f-pmpvx\" (UID: \"0f328a26-b914-49b2-9124-b12b968232dd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.857869 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn95r\" (UniqueName: \"kubernetes.io/projected/dc74eb23-85aa-4df4-8273-0af9a0a37dda-kube-api-access-dn95r\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.857917 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6jrr\" (UniqueName: \"kubernetes.io/projected/3b49d7d8-7c63-482c-b882-25c01e798afe-kube-api-access-z6jrr\") pod \"telemetry-operator-controller-manager-6dcb54f59-lnlx2\" (UID: \"3b49d7d8-7c63-482c-b882-25c01e798afe\") " pod="openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.857954 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.858023 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxqh9\" (UniqueName: \"kubernetes.io/projected/b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81-kube-api-access-vxqh9\") pod \"swift-operator-controller-manager-68fc8c869-hflqw\" (UID: \"b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.858064 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6gz6\" (UniqueName: \"kubernetes.io/projected/86e535af-d713-4a58-80c0-0ce6a464f666-kube-api-access-m6gz6\") pod \"placement-operator-controller-manager-5b964cf4cd-bmrq5\" (UID: \"86e535af-d713-4a58-80c0-0ce6a464f666\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5" Feb 04 11:43:18 crc kubenswrapper[4728]: E0204 11:43:18.860254 4728 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 11:43:18 crc kubenswrapper[4728]: E0204 11:43:18.860315 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert podName:dc74eb23-85aa-4df4-8273-0af9a0a37dda nodeName:}" failed. No retries permitted until 2026-02-04 11:43:19.360297476 +0000 UTC m=+948.503001861 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" (UID: "dc74eb23-85aa-4df4-8273-0af9a0a37dda") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.860523 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.862776 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.875616 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-pnh2v" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.879466 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.881732 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.891356 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-cf2v5"] Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.893515 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-cf2v5" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.910010 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-hctl5" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.918297 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn95r\" (UniqueName: \"kubernetes.io/projected/dc74eb23-85aa-4df4-8273-0af9a0a37dda-kube-api-access-dn95r\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.923670 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6gz6\" (UniqueName: \"kubernetes.io/projected/86e535af-d713-4a58-80c0-0ce6a464f666-kube-api-access-m6gz6\") pod \"placement-operator-controller-manager-5b964cf4cd-bmrq5\" (UID: \"86e535af-d713-4a58-80c0-0ce6a464f666\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.927229 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcstc\" (UniqueName: \"kubernetes.io/projected/0f328a26-b914-49b2-9124-b12b968232dd-kube-api-access-jcstc\") pod \"ovn-operator-controller-manager-788c46999f-pmpvx\" (UID: \"0f328a26-b914-49b2-9124-b12b968232dd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.940612 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-cf2v5"] Feb 04 11:43:18 crc 
kubenswrapper[4728]: I0204 11:43:18.962796 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6jrr\" (UniqueName: \"kubernetes.io/projected/3b49d7d8-7c63-482c-b882-25c01e798afe-kube-api-access-z6jrr\") pod \"telemetry-operator-controller-manager-6dcb54f59-lnlx2\" (UID: \"3b49d7d8-7c63-482c-b882-25c01e798afe\") " pod="openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.963569 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjwr8\" (UniqueName: \"kubernetes.io/projected/505cc508-1a1d-44d9-9067-ca0c376e6522-kube-api-access-rjwr8\") pod \"test-operator-controller-manager-56f8bfcd9f-wrk5q\" (UID: \"505cc508-1a1d-44d9-9067-ca0c376e6522\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.963626 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxqh9\" (UniqueName: \"kubernetes.io/projected/b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81-kube-api-access-vxqh9\") pod \"swift-operator-controller-manager-68fc8c869-hflqw\" (UID: \"b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.963656 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db6gv\" (UniqueName: \"kubernetes.io/projected/9fe9a75e-2006-4143-a451-e135b2d68297-kube-api-access-db6gv\") pod \"watcher-operator-controller-manager-564965969-cf2v5\" (UID: \"9fe9a75e-2006-4143-a451-e135b2d68297\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-cf2v5" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.965188 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.987361 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6jrr\" (UniqueName: \"kubernetes.io/projected/3b49d7d8-7c63-482c-b882-25c01e798afe-kube-api-access-z6jrr\") pod \"telemetry-operator-controller-manager-6dcb54f59-lnlx2\" (UID: \"3b49d7d8-7c63-482c-b882-25c01e798afe\") " pod="openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2" Feb 04 11:43:18 crc kubenswrapper[4728]: I0204 11:43:18.991934 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.000925 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.002004 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.002609 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxqh9\" (UniqueName: \"kubernetes.io/projected/b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81-kube-api-access-vxqh9\") pod \"swift-operator-controller-manager-68fc8c869-hflqw\" (UID: \"b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.016816 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.017468 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.017895 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.018183 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-f26lj" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.035921 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.037092 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.040287 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.041097 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-cxbzf" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.042105 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.052999 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.065390 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.065452 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4gkx\" (UniqueName: \"kubernetes.io/projected/03f4099d-cbdc-4884-a85a-2ffe82d616d1-kube-api-access-k4gkx\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.065536 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.065591 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert\") pod \"infra-operator-controller-manager-79955696d6-78rpz\" (UID: \"b6c7167f-86c2-4e7e-8699-24f3932124ab\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.065623 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjwr8\" (UniqueName: \"kubernetes.io/projected/505cc508-1a1d-44d9-9067-ca0c376e6522-kube-api-access-rjwr8\") pod \"test-operator-controller-manager-56f8bfcd9f-wrk5q\" (UID: \"505cc508-1a1d-44d9-9067-ca0c376e6522\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.065681 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db6gv\" (UniqueName: \"kubernetes.io/projected/9fe9a75e-2006-4143-a451-e135b2d68297-kube-api-access-db6gv\") pod \"watcher-operator-controller-manager-564965969-cf2v5\" (UID: \"9fe9a75e-2006-4143-a451-e135b2d68297\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-cf2v5" Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.066106 4728 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.066195 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert podName:b6c7167f-86c2-4e7e-8699-24f3932124ab nodeName:}" failed. No retries permitted until 2026-02-04 11:43:20.06617141 +0000 UTC m=+949.208875875 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert") pod "infra-operator-controller-manager-79955696d6-78rpz" (UID: "b6c7167f-86c2-4e7e-8699-24f3932124ab") : secret "infra-operator-webhook-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.075545 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.088006 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjwr8\" (UniqueName: \"kubernetes.io/projected/505cc508-1a1d-44d9-9067-ca0c376e6522-kube-api-access-rjwr8\") pod \"test-operator-controller-manager-56f8bfcd9f-wrk5q\" (UID: \"505cc508-1a1d-44d9-9067-ca0c376e6522\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.092650 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.093036 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db6gv\" (UniqueName: \"kubernetes.io/projected/9fe9a75e-2006-4143-a451-e135b2d68297-kube-api-access-db6gv\") pod \"watcher-operator-controller-manager-564965969-cf2v5\" (UID: \"9fe9a75e-2006-4143-a451-e135b2d68297\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-cf2v5" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.135629 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.157236 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.168247 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpdp7\" (UniqueName: \"kubernetes.io/projected/18e15914-8bd3-42e9-9c5b-f973b203ece8-kube-api-access-tpdp7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xzrsm\" (UID: \"18e15914-8bd3-42e9-9c5b-f973b203ece8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.168305 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.168328 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4gkx\" (UniqueName: \"kubernetes.io/projected/03f4099d-cbdc-4884-a85a-2ffe82d616d1-kube-api-access-k4gkx\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.168383 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.168514 4728 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.168562 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:19.668546748 +0000 UTC m=+948.811251123 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "metrics-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.168841 4728 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.168873 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:19.668864896 +0000 UTC m=+948.811569281 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "webhook-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.188061 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4gkx\" (UniqueName: \"kubernetes.io/projected/03f4099d-cbdc-4884-a85a-2ffe82d616d1-kube-api-access-k4gkx\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.202484 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.224290 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.240920 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-cf2v5" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.269972 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpdp7\" (UniqueName: \"kubernetes.io/projected/18e15914-8bd3-42e9-9c5b-f973b203ece8-kube-api-access-tpdp7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xzrsm\" (UID: \"18e15914-8bd3-42e9-9c5b-f973b203ece8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.304165 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpdp7\" (UniqueName: \"kubernetes.io/projected/18e15914-8bd3-42e9-9c5b-f973b203ece8-kube-api-access-tpdp7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xzrsm\" (UID: \"18e15914-8bd3-42e9-9c5b-f973b203ece8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.331296 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.345900 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.371667 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.371915 4728 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.371989 4728 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert podName:dc74eb23-85aa-4df4-8273-0af9a0a37dda nodeName:}" failed. No retries permitted until 2026-02-04 11:43:20.371972139 +0000 UTC m=+949.514676524 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" (UID: "dc74eb23-85aa-4df4-8273-0af9a0a37dda") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.379927 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm" Feb 04 11:43:19 crc kubenswrapper[4728]: W0204 11:43:19.383266 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a514d11_28a0_4a17_9714_7a8d60216402.slice/crio-f5167fac49741e35dddd6ed2b28c18d8a1e461b8cc46c4fc2574a309686674f6 WatchSource:0}: Error finding container f5167fac49741e35dddd6ed2b28c18d8a1e461b8cc46c4fc2574a309686674f6: Status 404 returned error can't find the container with id f5167fac49741e35dddd6ed2b28c18d8a1e461b8cc46c4fc2574a309686674f6 Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.526522 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.536139 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.677052 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.677382 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.679247 4728 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.679321 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:20.679302116 +0000 UTC m=+949.822006501 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "metrics-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.681198 4728 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: E0204 11:43:19.681238 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:20.681225966 +0000 UTC m=+949.823930351 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "webhook-server-cert" not found Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.685451 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.786915 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl" event={"ID":"e8b0005b-18c6-4701-b22f-41d0127becf7","Type":"ContainerStarted","Data":"a5a8927bc1095b8ebb2d19c35e22f2406ed1962538cc0990698a50af0c92a138"} Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.788259 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn" event={"ID":"92f15a6a-b8bc-470b-9558-72b958a8c32b","Type":"ContainerStarted","Data":"75dd448489e4f85d3f20cd395d97a93d715ccb29a422289a44b8bc6dc2b87553"} Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.789397 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf" event={"ID":"3829b622-23b9-4160-8875-b2c310b3b531","Type":"ContainerStarted","Data":"6a1845dbddebd09d94ceb1934d1480564ebb5c3629fdd675d0fb30f14ab97618"} Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.790936 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz" event={"ID":"0760f0c3-0076-4be3-8b2e-2dc9fcf0d929","Type":"ContainerStarted","Data":"e3e4a9760479c527457e7b329bccb4cd82ab348c1d574086908243596b38cc6d"} Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.792108 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl" event={"ID":"3a514d11-28a0-4a17-9714-7a8d60216402","Type":"ContainerStarted","Data":"f5167fac49741e35dddd6ed2b28c18d8a1e461b8cc46c4fc2574a309686674f6"} Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.793202 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b" event={"ID":"f488ccbd-9346-4fd7-bfce-f7e5375f9100","Type":"ContainerStarted","Data":"8ebfa83908130a472beaa4a5eda9e95f4db6f10dc280e0ef4f73a13e420ddbd5"} Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.965590 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.970937 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j"] Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.983188 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht"] Feb 04 11:43:19 crc kubenswrapper[4728]: W0204 11:43:19.988683 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod698c89f3_b4ef_443f_bce4_f1fe2fdbc1c7.slice/crio-763e5c8513b3227505e5e656d21a11c394f7421311d2fd1804f9dd446f87bfd0 WatchSource:0}: Error finding container 763e5c8513b3227505e5e656d21a11c394f7421311d2fd1804f9dd446f87bfd0: Status 404 returned error can't find the container with id 763e5c8513b3227505e5e656d21a11c394f7421311d2fd1804f9dd446f87bfd0 Feb 04 11:43:19 crc kubenswrapper[4728]: I0204 11:43:19.992037 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8"] Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.014010 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc"] Feb 04 11:43:20 crc kubenswrapper[4728]: W0204 11:43:20.027625 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb387892_df64_4339_abd3_925fce438123.slice/crio-95badab2117fcfadce23dbd37337b741ad9d0cb7d74208b7ef9289635e1061de WatchSource:0}: Error finding container 95badab2117fcfadce23dbd37337b741ad9d0cb7d74208b7ef9289635e1061de: Status 404 returned error can't find the container with id 95badab2117fcfadce23dbd37337b741ad9d0cb7d74208b7ef9289635e1061de Feb 04 11:43:20 crc kubenswrapper[4728]: W0204 11:43:20.030740 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29a70c36_efb8_40bc_89ec_68d20f9cf253.slice/crio-988e4be633b50155ad7a8cfadc670e2ca3a8cf5c538cf343ccda1b151a7c171b WatchSource:0}: Error finding container 988e4be633b50155ad7a8cfadc670e2ca3a8cf5c538cf343ccda1b151a7c171b: Status 404 returned error can't find the container with id 988e4be633b50155ad7a8cfadc670e2ca3a8cf5c538cf343ccda1b151a7c171b Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.081936 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert\") pod \"infra-operator-controller-manager-79955696d6-78rpz\" (UID: \"b6c7167f-86c2-4e7e-8699-24f3932124ab\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.082118 4728 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.082173 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert podName:b6c7167f-86c2-4e7e-8699-24f3932124ab nodeName:}" failed. No retries permitted until 2026-02-04 11:43:22.082157927 +0000 UTC m=+951.224862312 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert") pod "infra-operator-controller-manager-79955696d6-78rpz" (UID: "b6c7167f-86c2-4e7e-8699-24f3932124ab") : secret "infra-operator-webhook-server-cert" not found Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.115686 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx"] Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.126272 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5"] Feb 04 11:43:20 crc kubenswrapper[4728]: W0204 11:43:20.126964 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b49d7d8_7c63_482c_b882_25c01e798afe.slice/crio-cd83d37e960efdd8ab95bba2329ed471ec1515b72050bd53464784ac3c00bdeb WatchSource:0}: Error finding container cd83d37e960efdd8ab95bba2329ed471ec1515b72050bd53464784ac3c00bdeb: Status 404 returned error can't find the container with id cd83d37e960efdd8ab95bba2329ed471ec1515b72050bd53464784ac3c00bdeb Feb 04 11:43:20 crc kubenswrapper[4728]: W0204 11:43:20.131717 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d88ab1e_b850_444e_90b2_05b6e311178e.slice/crio-ddabf07ca8eb023392d329b2bed3dae9b434c281356dba53df4bc794f3bc562b WatchSource:0}: Error finding container ddabf07ca8eb023392d329b2bed3dae9b434c281356dba53df4bc794f3bc562b: Status 404 returned error can't find the container with id ddabf07ca8eb023392d329b2bed3dae9b434c281356dba53df4bc794f3bc562b Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.135491 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jcstc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-pmpvx_openstack-operators(0f328a26-b914-49b2-9124-b12b968232dd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.136914 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx" podUID="0f328a26-b914-49b2-9124-b12b968232dd" Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.141043 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7"] Feb 04 11:43:20 crc kubenswrapper[4728]: W0204 11:43:20.145778 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86e535af_d713_4a58_80c0_0ce6a464f666.slice/crio-4210ec0eb6126c105cb3088b60580712484927f630a041151d5858bb9e816dc5 WatchSource:0}: Error finding container 4210ec0eb6126c105cb3088b60580712484927f630a041151d5858bb9e816dc5: Status 404 returned error can't find the container with id 4210ec0eb6126c105cb3088b60580712484927f630a041151d5858bb9e816dc5 Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.147201 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5"] Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.149803 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6gz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-bmrq5_openstack-operators(86e535af-d713-4a58-80c0-0ce6a464f666): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.153179 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2"] Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.152154 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5" podUID="86e535af-d713-4a58-80c0-0ce6a464f666" Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.293493 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q"] Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.308685 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rjwr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-wrk5q_openstack-operators(505cc508-1a1d-44d9-9067-ca0c376e6522): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.310244 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q" podUID="505cc508-1a1d-44d9-9067-ca0c376e6522" Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.314552 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-cf2v5"] Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.328996 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm"] Feb 04 11:43:20 crc kubenswrapper[4728]: W0204 11:43:20.334933 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fe9a75e_2006_4143_a451_e135b2d68297.slice/crio-e922aee5c7a70521d6a057d1f3b87c3bfd784961400d42f3ab475769d9c391b9 WatchSource:0}: Error finding container e922aee5c7a70521d6a057d1f3b87c3bfd784961400d42f3ab475769d9c391b9: Status 404 returned error can't find the container with id e922aee5c7a70521d6a057d1f3b87c3bfd784961400d42f3ab475769d9c391b9 Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.335034 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw"] Feb 04 11:43:20 crc kubenswrapper[4728]: W0204 11:43:20.336444 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18e15914_8bd3_42e9_9c5b_f973b203ece8.slice/crio-7c673e06bd13d5aea4d092d19bf06cc79a7091aef062df5d779149cff1e6e27f WatchSource:0}: Error finding container 7c673e06bd13d5aea4d092d19bf06cc79a7091aef062df5d779149cff1e6e27f: Status 404 returned error can't find the container with id 7c673e06bd13d5aea4d092d19bf06cc79a7091aef062df5d779149cff1e6e27f Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.338865 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tpdp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xzrsm_openstack-operators(18e15914-8bd3-42e9-9c5b-f973b203ece8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.340270 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm" podUID="18e15914-8bd3-42e9-9c5b-f973b203ece8" Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.349915 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxqh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-hflqw_openstack-operators(b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.351113 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw" podUID="b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81" Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.386604 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.386903 4728 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.386987 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert podName:dc74eb23-85aa-4df4-8273-0af9a0a37dda nodeName:}" failed. No retries permitted until 2026-02-04 11:43:22.386965801 +0000 UTC m=+951.529670186 (durationBeforeRetry 2s). 
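Every &Container{...} dump above ends the same way: start failed ... ErrImagePull: pull QPS exceeded. The kubelet throttles image pulls with a client-side token bucket (the registryPullQPS and registryBurst fields of KubeletConfiguration, which default to 5 and 10), so with this many operator pods landing on the node at once, pulls beyond the burst are rejected before the registry is even contacted and the pods fall into ImagePullBackOff. A sketch of that token-bucket behaviour using golang.org/x/time/rate; this models the limiter and is not the kubelet's actual code path.

package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// registryPullQPS=5, registryBurst=10 are the kubelet defaults.
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	// Simulate a burst of pull requests arriving at effectively the same instant.
	for pull := 1; pull <= 15; pull++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: started\n", pull)
		} else {
			// The kubelet surfaces this as ErrImagePull: "pull QPS exceeded".
			fmt.Printf("pull %2d: pull QPS exceeded\n", pull)
		}
	}
}

With an instantaneous burst, the first ten pulls consume the bucket and the remaining five are refused, which matches the pattern above: some operator images start pulling immediately while ovn, placement, test, swift, and rabbitmq are deferred.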
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" (UID: "dc74eb23-85aa-4df4-8273-0af9a0a37dda") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.693796 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.693878 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.693979 4728 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.694027 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:22.694012082 +0000 UTC m=+951.836716467 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "metrics-server-cert" not found Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.694348 4728 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.694373 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:22.6943664 +0000 UTC m=+951.837070785 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "webhook-server-cert" not found Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.813922 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7" event={"ID":"6d88ab1e-b850-444e-90b2-05b6e311178e","Type":"ContainerStarted","Data":"ddabf07ca8eb023392d329b2bed3dae9b434c281356dba53df4bc794f3bc562b"} Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.818347 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8" event={"ID":"29a70c36-efb8-40bc-89ec-68d20f9cf253","Type":"ContainerStarted","Data":"988e4be633b50155ad7a8cfadc670e2ca3a8cf5c538cf343ccda1b151a7c171b"} Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.823039 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j" event={"ID":"4e4f1e2f-ac6a-4dce-a074-2637e53f35a7","Type":"ContainerStarted","Data":"5a09931bb703bfe83a62d7efdaca4f29ff6a9443844ca44cf561c5a1bb54795b"} Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.824310 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc" event={"ID":"cb387892-df64-4339-abd3-925fce438123","Type":"ContainerStarted","Data":"95badab2117fcfadce23dbd37337b741ad9d0cb7d74208b7ef9289635e1061de"} Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.827002 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2" event={"ID":"3b49d7d8-7c63-482c-b882-25c01e798afe","Type":"ContainerStarted","Data":"cd83d37e960efdd8ab95bba2329ed471ec1515b72050bd53464784ac3c00bdeb"} Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.835768 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-cf2v5" event={"ID":"9fe9a75e-2006-4143-a451-e135b2d68297","Type":"ContainerStarted","Data":"e922aee5c7a70521d6a057d1f3b87c3bfd784961400d42f3ab475769d9c391b9"} Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.837284 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx" event={"ID":"0f328a26-b914-49b2-9124-b12b968232dd","Type":"ContainerStarted","Data":"3cb6aea6fe6fc15866a8d65c987cf4b9aa41ec91ece1e647d83eafd59f0569f8"} Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.841721 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx" podUID="0f328a26-b914-49b2-9124-b12b968232dd" Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.845295 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht" event={"ID":"624d3845-dd5b-46eb-80cc-5a587a812d78","Type":"ContainerStarted","Data":"32a7451e61513154d254fd086edcfb7835e05c8254a47ea70067f6b654c37f34"} Feb 04 
11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.847271 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q" event={"ID":"505cc508-1a1d-44d9-9067-ca0c376e6522","Type":"ContainerStarted","Data":"563fdc84897ba663f8d6023299e591666411c0d47b8a95757445b841091eb454"} Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.860082 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q" podUID="505cc508-1a1d-44d9-9067-ca0c376e6522" Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.874104 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5" event={"ID":"c833a690-9e25-4bbe-9d81-5d9cddbc7279","Type":"ContainerStarted","Data":"e7a1ea08b889ed22624466708e66d5b632a1dde473ce929dea44f82628c022b3"} Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.875579 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm" event={"ID":"18e15914-8bd3-42e9-9c5b-f973b203ece8","Type":"ContainerStarted","Data":"7c673e06bd13d5aea4d092d19bf06cc79a7091aef062df5d779149cff1e6e27f"} Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.878005 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm" podUID="18e15914-8bd3-42e9-9c5b-f973b203ece8" Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.878745 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj" event={"ID":"698c89f3-b4ef-443f-bce4-f1fe2fdbc1c7","Type":"ContainerStarted","Data":"763e5c8513b3227505e5e656d21a11c394f7421311d2fd1804f9dd446f87bfd0"} Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.880326 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw" event={"ID":"b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81","Type":"ContainerStarted","Data":"42ecc067cd272275a4f43a572d924951d4e132fce82270133a5c973206884ead"} Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 11:43:20.884166 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw" podUID="b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81" Feb 04 11:43:20 crc kubenswrapper[4728]: I0204 11:43:20.884693 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5" event={"ID":"86e535af-d713-4a58-80c0-0ce6a464f666","Type":"ContainerStarted","Data":"4210ec0eb6126c105cb3088b60580712484927f630a041151d5858bb9e816dc5"} Feb 04 11:43:20 crc kubenswrapper[4728]: E0204 
11:43:20.887598 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5" podUID="86e535af-d713-4a58-80c0-0ce6a464f666" Feb 04 11:43:21 crc kubenswrapper[4728]: E0204 11:43:21.920640 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx" podUID="0f328a26-b914-49b2-9124-b12b968232dd" Feb 04 11:43:21 crc kubenswrapper[4728]: E0204 11:43:21.921081 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5" podUID="86e535af-d713-4a58-80c0-0ce6a464f666" Feb 04 11:43:21 crc kubenswrapper[4728]: E0204 11:43:21.921103 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm" podUID="18e15914-8bd3-42e9-9c5b-f973b203ece8" Feb 04 11:43:21 crc kubenswrapper[4728]: E0204 11:43:21.921130 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw" podUID="b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81" Feb 04 11:43:21 crc kubenswrapper[4728]: E0204 11:43:21.928068 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q" podUID="505cc508-1a1d-44d9-9067-ca0c376e6522" Feb 04 11:43:22 crc kubenswrapper[4728]: I0204 11:43:22.124960 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert\") pod \"infra-operator-controller-manager-79955696d6-78rpz\" (UID: \"b6c7167f-86c2-4e7e-8699-24f3932124ab\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" Feb 04 11:43:22 crc kubenswrapper[4728]: E0204 11:43:22.125101 4728 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 04 11:43:22 crc kubenswrapper[4728]: E0204 11:43:22.125147 4728 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert podName:b6c7167f-86c2-4e7e-8699-24f3932124ab nodeName:}" failed. No retries permitted until 2026-02-04 11:43:26.125132833 +0000 UTC m=+955.267837208 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert") pod "infra-operator-controller-manager-79955696d6-78rpz" (UID: "b6c7167f-86c2-4e7e-8699-24f3932124ab") : secret "infra-operator-webhook-server-cert" not found Feb 04 11:43:22 crc kubenswrapper[4728]: I0204 11:43:22.433765 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" Feb 04 11:43:22 crc kubenswrapper[4728]: E0204 11:43:22.433995 4728 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 11:43:22 crc kubenswrapper[4728]: E0204 11:43:22.434039 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert podName:dc74eb23-85aa-4df4-8273-0af9a0a37dda nodeName:}" failed. No retries permitted until 2026-02-04 11:43:26.434025781 +0000 UTC m=+955.576730166 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" (UID: "dc74eb23-85aa-4df4-8273-0af9a0a37dda") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 11:43:22 crc kubenswrapper[4728]: I0204 11:43:22.737851 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:22 crc kubenswrapper[4728]: I0204 11:43:22.737932 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:22 crc kubenswrapper[4728]: E0204 11:43:22.737980 4728 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 04 11:43:22 crc kubenswrapper[4728]: E0204 11:43:22.738049 4728 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 04 11:43:22 crc kubenswrapper[4728]: E0204 11:43:22.738055 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:26.738037774 +0000 UTC m=+955.880742159 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "webhook-server-cert" not found Feb 04 11:43:22 crc kubenswrapper[4728]: E0204 11:43:22.738131 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:26.738110706 +0000 UTC m=+955.880815091 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "metrics-server-cert" not found Feb 04 11:43:26 crc kubenswrapper[4728]: I0204 11:43:26.194134 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert\") pod \"infra-operator-controller-manager-79955696d6-78rpz\" (UID: \"b6c7167f-86c2-4e7e-8699-24f3932124ab\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" Feb 04 11:43:26 crc kubenswrapper[4728]: E0204 11:43:26.194668 4728 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 04 11:43:26 crc kubenswrapper[4728]: E0204 11:43:26.194732 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert podName:b6c7167f-86c2-4e7e-8699-24f3932124ab nodeName:}" failed. No retries permitted until 2026-02-04 11:43:34.194712487 +0000 UTC m=+963.337416872 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert") pod "infra-operator-controller-manager-79955696d6-78rpz" (UID: "b6c7167f-86c2-4e7e-8699-24f3932124ab") : secret "infra-operator-webhook-server-cert" not found Feb 04 11:43:26 crc kubenswrapper[4728]: I0204 11:43:26.498394 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" Feb 04 11:43:26 crc kubenswrapper[4728]: E0204 11:43:26.498634 4728 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 11:43:26 crc kubenswrapper[4728]: E0204 11:43:26.498689 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert podName:dc74eb23-85aa-4df4-8273-0af9a0a37dda nodeName:}" failed. No retries permitted until 2026-02-04 11:43:34.498671629 +0000 UTC m=+963.641376014 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" (UID: "dc74eb23-85aa-4df4-8273-0af9a0a37dda") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 04 11:43:26 crc kubenswrapper[4728]: I0204 11:43:26.804544 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:26 crc kubenswrapper[4728]: I0204 11:43:26.804649 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:43:26 crc kubenswrapper[4728]: E0204 11:43:26.804663 4728 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 04 11:43:26 crc kubenswrapper[4728]: E0204 11:43:26.804741 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:34.804717824 +0000 UTC m=+963.947422209 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "metrics-server-cert" not found Feb 04 11:43:26 crc kubenswrapper[4728]: E0204 11:43:26.804811 4728 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 04 11:43:26 crc kubenswrapper[4728]: E0204 11:43:26.804857 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:34.804843078 +0000 UTC m=+963.947547463 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "webhook-server-cert" not found Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.003259 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl" event={"ID":"3a514d11-28a0-4a17-9714-7a8d60216402","Type":"ContainerStarted","Data":"273f93126852ecc4f8849a5d19a8ed900136d7af97735746a0820ce80a15b83b"} Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.004693 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl" Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.007434 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7" event={"ID":"6d88ab1e-b850-444e-90b2-05b6e311178e","Type":"ContainerStarted","Data":"66e8bf2e813db6fe0c1d47a80c8eaf1d1d8731a86225add77b59c379c2a04a15"} Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.007909 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7" Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.011069 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc" event={"ID":"cb387892-df64-4339-abd3-925fce438123","Type":"ContainerStarted","Data":"9e447f289263e2d3c629f770edefe2d23298bd7ad0a8fb558c2e126d44e07751"} Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.011646 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc" Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.019384 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5" event={"ID":"c833a690-9e25-4bbe-9d81-5d9cddbc7279","Type":"ContainerStarted","Data":"50b1bb9f46bf72d083ed78e54faac871db7c9b9a6bb5f8897697fb784f728132"} Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.019858 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5" Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.028706 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn" event={"ID":"92f15a6a-b8bc-470b-9558-72b958a8c32b","Type":"ContainerStarted","Data":"d1a98b817968034ee967e2f922297df83987f3a5bd82e06911e1e7adaf867d85"} Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.028954 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn" Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.034358 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj" event={"ID":"698c89f3-b4ef-443f-bce4-f1fe2fdbc1c7","Type":"ContainerStarted","Data":"c80d03ec2f4806aa45994ae4cc4e61e9d44bea73b114dbc2204e71e03d3732f3"} Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.034806 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj" Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.042332 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf" event={"ID":"3829b622-23b9-4160-8875-b2c310b3b531","Type":"ContainerStarted","Data":"b623f37d1c8fb637fa0f2eef5c2744e53fd7fb3be6e4961de8bb69d9e9d7decd"} Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.042531 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf" Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.050537 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j" event={"ID":"4e4f1e2f-ac6a-4dce-a074-2637e53f35a7","Type":"ContainerStarted","Data":"ddfaaceeb93fa2a3bf0bca4cc11cdf4f544394f52d023fa99c26c95a06fe710c"} Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.050648 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j" Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.051511 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl" podStartSLOduration=2.6277339140000002 podStartE2EDuration="16.051491692s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:19.413454246 +0000 UTC m=+948.556158631" lastFinishedPulling="2026-02-04 11:43:32.837212024 +0000 UTC m=+961.979916409" observedRunningTime="2026-02-04 11:43:34.044855873 +0000 UTC m=+963.187560268" watchObservedRunningTime="2026-02-04 11:43:34.051491692 +0000 UTC m=+963.194196077" Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.065220 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl" event={"ID":"e8b0005b-18c6-4701-b22f-41d0127becf7","Type":"ContainerStarted","Data":"cd7e3b889ebfcdf89b8d53920ee42da54839eaac8b32660e281cf363ed0df1d9"} Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.065387 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl" Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.080381 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-cf2v5" event={"ID":"9fe9a75e-2006-4143-a451-e135b2d68297","Type":"ContainerStarted","Data":"d1058f0f6e9c1a4ad7c3f80116d40be4dd5719f1bf234c9d15ace508f8f66852"} Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.080554 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-cf2v5" Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.088428 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht" event={"ID":"624d3845-dd5b-46eb-80cc-5a587a812d78","Type":"ContainerStarted","Data":"64e07539b8e9451bd460ff6c4e70218cb099ccde8097ecf32bea802fcfb4abe1"} Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.089277 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht" Feb 04 
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.099821 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2" event={"ID":"3b49d7d8-7c63-482c-b882-25c01e798afe","Type":"ContainerStarted","Data":"69f5e3c31bcf932006908c884e28c7a1bbb98094d73ce984c59ced38ffc2c36e"}
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.099979 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.104409 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8" event={"ID":"29a70c36-efb8-40bc-89ec-68d20f9cf253","Type":"ContainerStarted","Data":"4ab4b48df9d8a16aaefb008f75f46ff72c32fb2c61d2fb3afdcb4b4c0c00902a"}
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.104494 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.106654 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b" event={"ID":"f488ccbd-9346-4fd7-bfce-f7e5375f9100","Type":"ContainerStarted","Data":"5877d7148d56cc066aead0c8930b41c6d1a14af411da60a28c0ffa6b19052841"}
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.106791 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.108968 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz" event={"ID":"0760f0c3-0076-4be3-8b2e-2dc9fcf0d929","Type":"ContainerStarted","Data":"227595e6c5f92f875b8be211df516c82d6b092520061fcf1b6818747795013c8"}
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.109130 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.128239 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc" podStartSLOduration=3.280623663 podStartE2EDuration="16.128216917s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.031290212 +0000 UTC m=+949.173994597" lastFinishedPulling="2026-02-04 11:43:32.878883476 +0000 UTC m=+962.021587851" observedRunningTime="2026-02-04 11:43:34.12360903 +0000 UTC m=+963.266313415" watchObservedRunningTime="2026-02-04 11:43:34.128216917 +0000 UTC m=+963.270921302"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.157281 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf" podStartSLOduration=2.886359052 podStartE2EDuration="16.157266747s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:19.610945756 +0000 UTC m=+948.753650141" lastFinishedPulling="2026-02-04 11:43:32.881853441 +0000 UTC m=+962.024557836" observedRunningTime="2026-02-04 11:43:34.151872849 +0000 UTC m=+963.294577234" watchObservedRunningTime="2026-02-04 11:43:34.157266747 +0000 UTC m=+963.299971132"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.182218 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5" podStartSLOduration=3.422878696 podStartE2EDuration="16.182194112s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.127957434 +0000 UTC m=+949.270661809" lastFinishedPulling="2026-02-04 11:43:32.88727284 +0000 UTC m=+962.029977225" observedRunningTime="2026-02-04 11:43:34.180980441 +0000 UTC m=+963.323684826" watchObservedRunningTime="2026-02-04 11:43:34.182194112 +0000 UTC m=+963.324898497"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.231441 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert\") pod \"infra-operator-controller-manager-79955696d6-78rpz\" (UID: \"b6c7167f-86c2-4e7e-8699-24f3932124ab\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.248862 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7" podStartSLOduration=3.429886635 podStartE2EDuration="16.248848179s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.135159157 +0000 UTC m=+949.277863542" lastFinishedPulling="2026-02-04 11:43:32.954120701 +0000 UTC m=+962.096825086" observedRunningTime="2026-02-04 11:43:34.248214663 +0000 UTC m=+963.390919048" watchObservedRunningTime="2026-02-04 11:43:34.248848179 +0000 UTC m=+963.391552564"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.249379 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj" podStartSLOduration=3.421608035 podStartE2EDuration="16.249373583s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.010625886 +0000 UTC m=+949.153330271" lastFinishedPulling="2026-02-04 11:43:32.838391434 +0000 UTC m=+961.981095819" observedRunningTime="2026-02-04 11:43:34.217143272 +0000 UTC m=+963.359847657" watchObservedRunningTime="2026-02-04 11:43:34.249373583 +0000 UTC m=+963.392077968"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.253168 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6c7167f-86c2-4e7e-8699-24f3932124ab-cert\") pod \"infra-operator-controller-manager-79955696d6-78rpz\" (UID: \"b6c7167f-86c2-4e7e-8699-24f3932124ab\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.273478 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn" podStartSLOduration=3.132431718 podStartE2EDuration="16.273457486s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:19.696155625 +0000 UTC m=+948.838860010" lastFinishedPulling="2026-02-04 11:43:32.837181393 +0000 UTC m=+961.979885778" observedRunningTime="2026-02-04 11:43:34.270968852 +0000 UTC m=+963.413673247" watchObservedRunningTime="2026-02-04 11:43:34.273457486 +0000 UTC m=+963.416161871"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.309437 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2" podStartSLOduration=3.556661613 podStartE2EDuration="16.309418251s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.128423406 +0000 UTC m=+949.271127791" lastFinishedPulling="2026-02-04 11:43:32.881180034 +0000 UTC m=+962.023884429" observedRunningTime="2026-02-04 11:43:34.306861497 +0000 UTC m=+963.449565902" watchObservedRunningTime="2026-02-04 11:43:34.309418251 +0000 UTC m=+963.452122636"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.332887 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-cf2v5" podStartSLOduration=3.789411523 podStartE2EDuration="16.332869389s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.337783739 +0000 UTC m=+949.480488134" lastFinishedPulling="2026-02-04 11:43:32.881241615 +0000 UTC m=+962.023946000" observedRunningTime="2026-02-04 11:43:34.332398837 +0000 UTC m=+963.475103222" watchObservedRunningTime="2026-02-04 11:43:34.332869389 +0000 UTC m=+963.475573774"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.380472 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zwn6n"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.389564 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.393017 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz" podStartSLOduration=2.91927964 podStartE2EDuration="16.393007131s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:19.412881571 +0000 UTC m=+948.555585946" lastFinishedPulling="2026-02-04 11:43:32.886609052 +0000 UTC m=+962.029313437" observedRunningTime="2026-02-04 11:43:34.391292147 +0000 UTC m=+963.533996522" watchObservedRunningTime="2026-02-04 11:43:34.393007131 +0000 UTC m=+963.535711516"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.395323 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8" podStartSLOduration=3.511938845 podStartE2EDuration="16.39531829s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.033869078 +0000 UTC m=+949.176573463" lastFinishedPulling="2026-02-04 11:43:32.917248523 +0000 UTC m=+962.059952908" observedRunningTime="2026-02-04 11:43:34.354659814 +0000 UTC m=+963.497364219" watchObservedRunningTime="2026-02-04 11:43:34.39531829 +0000 UTC m=+963.538022675"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.418651 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl" podStartSLOduration=2.89615723 podStartE2EDuration="16.418634913s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:19.314723791 +0000 UTC m=+948.457428176" lastFinishedPulling="2026-02-04 11:43:32.837201484 +0000 UTC m=+961.979905859" observedRunningTime="2026-02-04 11:43:34.412609 +0000 UTC m=+963.555313395" watchObservedRunningTime="2026-02-04 11:43:34.418634913 +0000 UTC m=+963.561339298"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.442354 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j" podStartSLOduration=3.5817867039999998 podStartE2EDuration="16.442337557s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.018355643 +0000 UTC m=+949.161060028" lastFinishedPulling="2026-02-04 11:43:32.878906496 +0000 UTC m=+962.021610881" observedRunningTime="2026-02-04 11:43:34.440406698 +0000 UTC m=+963.583111083" watchObservedRunningTime="2026-02-04 11:43:34.442337557 +0000 UTC m=+963.585041942"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.477783 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht" podStartSLOduration=3.650810072 podStartE2EDuration="16.47776708s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.010243306 +0000 UTC m=+949.152947691" lastFinishedPulling="2026-02-04 11:43:32.837200314 +0000 UTC m=+961.979904699" observedRunningTime="2026-02-04 11:43:34.476569669 +0000 UTC m=+963.619274054" watchObservedRunningTime="2026-02-04 11:43:34.47776708 +0000 UTC m=+963.620471465"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.500021 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b" podStartSLOduration=3.239159877 podStartE2EDuration="16.500000736s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:19.626846131 +0000 UTC m=+948.769550516" lastFinishedPulling="2026-02-04 11:43:32.88768699 +0000 UTC m=+962.030391375" observedRunningTime="2026-02-04 11:43:34.493668325 +0000 UTC m=+963.636372710" watchObservedRunningTime="2026-02-04 11:43:34.500000736 +0000 UTC m=+963.642705121"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.534810 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q"
Feb 04 11:43:34 crc kubenswrapper[4728]: E0204 11:43:34.535292 4728 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 04 11:43:34 crc kubenswrapper[4728]: E0204 11:43:34.535458 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert podName:dc74eb23-85aa-4df4-8273-0af9a0a37dda nodeName:}" failed. No retries permitted until 2026-02-04 11:43:50.535436379 +0000 UTC m=+979.678140824 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" (UID: "dc74eb23-85aa-4df4-8273-0af9a0a37dda") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.839698 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-78rpz"]
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.841438 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t"
Feb 04 11:43:34 crc kubenswrapper[4728]: I0204 11:43:34.841514 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t"
Feb 04 11:43:34 crc kubenswrapper[4728]: E0204 11:43:34.841618 4728 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 04 11:43:34 crc kubenswrapper[4728]: E0204 11:43:34.841668 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:50.841654719 +0000 UTC m=+979.984359104 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "metrics-server-cert" not found
Feb 04 11:43:34 crc kubenswrapper[4728]: E0204 11:43:34.841813 4728 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 04 11:43:34 crc kubenswrapper[4728]: E0204 11:43:34.841941 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs podName:03f4099d-cbdc-4884-a85a-2ffe82d616d1 nodeName:}" failed. No retries permitted until 2026-02-04 11:43:50.841922525 +0000 UTC m=+979.984626980 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs") pod "openstack-operator-controller-manager-67db8bbf87-ffl8t" (UID: "03f4099d-cbdc-4884-a85a-2ffe82d616d1") : secret "webhook-server-cert" not found
Feb 04 11:43:35 crc kubenswrapper[4728]: I0204 11:43:35.117472 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" event={"ID":"b6c7167f-86c2-4e7e-8699-24f3932124ab","Type":"ContainerStarted","Data":"c4a4609db55a863272aa568beaa140f03525de302d5747a41e3956a86f3917b0"}
Feb 04 11:43:35 crc kubenswrapper[4728]: I0204 11:43:35.448615 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:43:35 crc kubenswrapper[4728]: I0204 11:43:35.448677 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:43:37 crc kubenswrapper[4728]: I0204 11:43:37.139451 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q" event={"ID":"505cc508-1a1d-44d9-9067-ca0c376e6522","Type":"ContainerStarted","Data":"2044540b050f111de0cd0f5607af3fe48b9d944d0f3c3f959261130b1af45f68"}
Feb 04 11:43:37 crc kubenswrapper[4728]: I0204 11:43:37.141081 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q"
Feb 04 11:43:37 crc kubenswrapper[4728]: I0204 11:43:37.156333 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q" podStartSLOduration=3.176096011 podStartE2EDuration="19.156315484s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.308484452 +0000 UTC m=+949.451188847" lastFinishedPulling="2026-02-04 11:43:36.288703935 +0000 UTC m=+965.431408320" observedRunningTime="2026-02-04 11:43:37.155173165 +0000 UTC m=+966.297877570" watchObservedRunningTime="2026-02-04 11:43:37.156315484 +0000 UTC m=+966.299019869"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.149184 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" event={"ID":"b6c7167f-86c2-4e7e-8699-24f3932124ab","Type":"ContainerStarted","Data":"cb990b253eb27b6806686bee26759f387cef75a5bcf6c1761ef1d6e98e254e78"}
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.149288 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.153264 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx" event={"ID":"0f328a26-b914-49b2-9124-b12b968232dd","Type":"ContainerStarted","Data":"1d6f406132e7a61b3acd3f62d20c6eb7cdbfc1a65140ee7c8d7d18e243f80335"}
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.153885 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.166624 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz" podStartSLOduration=17.11694084 podStartE2EDuration="20.166610336s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:34.849226921 +0000 UTC m=+963.991931306" lastFinishedPulling="2026-02-04 11:43:37.898896417 +0000 UTC m=+967.041600802" observedRunningTime="2026-02-04 11:43:38.162843971 +0000 UTC m=+967.305548366" watchObservedRunningTime="2026-02-04 11:43:38.166610336 +0000 UTC m=+967.309314721"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.187534 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx" podStartSLOduration=2.440449203 podStartE2EDuration="20.187514368s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.135394073 +0000 UTC m=+949.278098458" lastFinishedPulling="2026-02-04 11:43:37.882459238 +0000 UTC m=+967.025163623" observedRunningTime="2026-02-04 11:43:38.181641829 +0000 UTC m=+967.324346224" watchObservedRunningTime="2026-02-04 11:43:38.187514368 +0000 UTC m=+967.330218753"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.576579 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-b5scl"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.641514 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-57ztz"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.673774 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rvp9b"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.704309 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-cbzrl"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.747213 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-848zn"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.766717 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-mfshf"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.833973 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-phlht"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.858233 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-qt7k8"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.885569 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-44sgj"
Feb 04 11:43:38 crc kubenswrapper[4728]: I0204 11:43:38.972190 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hq55j"
Feb 04 11:43:39 crc kubenswrapper[4728]: I0204 11:43:39.010814 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-r7hvc"
Feb 04 11:43:39 crc kubenswrapper[4728]: I0204 11:43:39.047790 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-cghq5"
Feb 04 11:43:39 crc kubenswrapper[4728]: I0204 11:43:39.068338 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-f7fb7"
Feb 04 11:43:39 crc kubenswrapper[4728]: I0204 11:43:39.160596 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6dcb54f59-lnlx2"
Feb 04 11:43:39 crc kubenswrapper[4728]: I0204 11:43:39.244351 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-cf2v5"
Feb 04 11:43:42 crc kubenswrapper[4728]: I0204 11:43:42.180940 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm" event={"ID":"18e15914-8bd3-42e9-9c5b-f973b203ece8","Type":"ContainerStarted","Data":"292d6c9a111bbee0a995cb54708b39743d99cabba22cbeb0325b6218a1ade242"}
Feb 04 11:43:42 crc kubenswrapper[4728]: I0204 11:43:42.182737 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5" event={"ID":"86e535af-d713-4a58-80c0-0ce6a464f666","Type":"ContainerStarted","Data":"503290c50d680c53f1d99ca34b39d2e5a4d789736fc5c6a8dbb8ed2a7d01d65b"}
Feb 04 11:43:42 crc kubenswrapper[4728]: I0204 11:43:42.182964 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5"
Feb 04 11:43:42 crc kubenswrapper[4728]: I0204 11:43:42.184368 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw" event={"ID":"b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81","Type":"ContainerStarted","Data":"40935f7bbd1aee93ed930d6daf884a6fd0c9028529f18dd7e34fd4b765d2beb6"}
Feb 04 11:43:42 crc kubenswrapper[4728]: I0204 11:43:42.184575 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw"
Feb 04 11:43:42 crc kubenswrapper[4728]: I0204 11:43:42.202995 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzrsm" podStartSLOduration=1.9106227709999999 podStartE2EDuration="23.202973518s" podCreationTimestamp="2026-02-04 11:43:19 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.338702381 +0000 UTC m=+949.481406766" lastFinishedPulling="2026-02-04 11:43:41.631053128 +0000 UTC m=+970.773757513" observedRunningTime="2026-02-04 11:43:42.202105888 +0000 UTC m=+971.344810273" watchObservedRunningTime="2026-02-04 11:43:42.202973518 +0000 UTC m=+971.345677903"
Feb 04 11:43:42 crc kubenswrapper[4728]: I0204 11:43:42.221563 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw" podStartSLOduration=2.9402290840000003 podStartE2EDuration="24.221544499s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.349739083 +0000 UTC m=+949.492443468" lastFinishedPulling="2026-02-04 11:43:41.631054498 +0000 UTC m=+970.773758883" observedRunningTime="2026-02-04 11:43:42.21647457 +0000 UTC m=+971.359178955" watchObservedRunningTime="2026-02-04 11:43:42.221544499 +0000 UTC m=+971.364248884"
Feb 04 11:43:42 crc kubenswrapper[4728]: I0204 11:43:42.235680 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5" podStartSLOduration=2.7807662090000003 podStartE2EDuration="24.235662245s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:20.149468752 +0000 UTC m=+949.292173137" lastFinishedPulling="2026-02-04 11:43:41.604364788 +0000 UTC m=+970.747069173" observedRunningTime="2026-02-04 11:43:42.230701298 +0000 UTC m=+971.373405683" watchObservedRunningTime="2026-02-04 11:43:42.235662245 +0000 UTC m=+971.378366630"
Feb 04 11:43:44 crc kubenswrapper[4728]: I0204 11:43:44.396397 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-78rpz"
Feb 04 11:43:49 crc kubenswrapper[4728]: I0204 11:43:49.077684 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-pmpvx"
Feb 04 11:43:49 crc kubenswrapper[4728]: I0204 11:43:49.096114 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bmrq5"
Feb 04 11:43:49 crc kubenswrapper[4728]: I0204 11:43:49.138803 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hflqw"
Feb 04 11:43:49 crc kubenswrapper[4728]: I0204 11:43:49.204605 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-wrk5q"
Feb 04 11:43:50 crc kubenswrapper[4728]: I0204 11:43:50.603548 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q"
Feb 04 11:43:50 crc kubenswrapper[4728]: I0204 11:43:50.610551 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc74eb23-85aa-4df4-8273-0af9a0a37dda-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dl677q\" (UID: \"dc74eb23-85aa-4df4-8273-0af9a0a37dda\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q"
Feb 04 11:43:50 crc kubenswrapper[4728]: I0204 11:43:50.623816 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-shdmb"
Feb 04 11:43:50 crc kubenswrapper[4728]: I0204 11:43:50.633053 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q"
Feb 04 11:43:50 crc kubenswrapper[4728]: I0204 11:43:50.910061 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t"
Feb 04 11:43:50 crc kubenswrapper[4728]: I0204 11:43:50.910479 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t"
Feb 04 11:43:50 crc kubenswrapper[4728]: I0204 11:43:50.915092 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-webhook-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t"
Feb 04 11:43:50 crc kubenswrapper[4728]: I0204 11:43:50.915616 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03f4099d-cbdc-4884-a85a-2ffe82d616d1-metrics-certs\") pod \"openstack-operator-controller-manager-67db8bbf87-ffl8t\" (UID: \"03f4099d-cbdc-4884-a85a-2ffe82d616d1\") " pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t"
Feb 04 11:43:51 crc kubenswrapper[4728]: I0204 11:43:51.126061 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q"]
Feb 04 11:43:51 crc kubenswrapper[4728]: W0204 11:43:51.135908 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc74eb23_85aa_4df4_8273_0af9a0a37dda.slice/crio-808ad99355b0e75695e69d6002266398b253bf054679728cb5028f103aa78f9f WatchSource:0}: Error finding container 808ad99355b0e75695e69d6002266398b253bf054679728cb5028f103aa78f9f: Status 404 returned error can't find the container with id 808ad99355b0e75695e69d6002266398b253bf054679728cb5028f103aa78f9f
Feb 04 11:43:51 crc kubenswrapper[4728]: I0204 11:43:51.154087 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-f26lj"
Feb 04 11:43:51 crc kubenswrapper[4728]: I0204 11:43:51.163213 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t"
Feb 04 11:43:51 crc kubenswrapper[4728]: I0204 11:43:51.262811 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" event={"ID":"dc74eb23-85aa-4df4-8273-0af9a0a37dda","Type":"ContainerStarted","Data":"808ad99355b0e75695e69d6002266398b253bf054679728cb5028f103aa78f9f"}
Feb 04 11:43:51 crc kubenswrapper[4728]: I0204 11:43:51.382944 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t"]
Feb 04 11:43:51 crc kubenswrapper[4728]: W0204 11:43:51.386477 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03f4099d_cbdc_4884_a85a_2ffe82d616d1.slice/crio-45f871959a2eb63c1f321dde7fcbf807f1df712e5fff184d9f2e6a9673030ac6 WatchSource:0}: Error finding container 45f871959a2eb63c1f321dde7fcbf807f1df712e5fff184d9f2e6a9673030ac6: Status 404 returned error can't find the container with id 45f871959a2eb63c1f321dde7fcbf807f1df712e5fff184d9f2e6a9673030ac6
Feb 04 11:43:52 crc kubenswrapper[4728]: I0204 11:43:52.284147 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" event={"ID":"03f4099d-cbdc-4884-a85a-2ffe82d616d1","Type":"ContainerStarted","Data":"45f871959a2eb63c1f321dde7fcbf807f1df712e5fff184d9f2e6a9673030ac6"}
Feb 04 11:43:58 crc kubenswrapper[4728]: I0204 11:43:58.333877 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" event={"ID":"03f4099d-cbdc-4884-a85a-2ffe82d616d1","Type":"ContainerStarted","Data":"bd712f5543b7ca1ae76fa6fc4d7ce04688ede91a6adf1f6e7c151a19e1a057a5"}
Feb 04 11:43:59 crc kubenswrapper[4728]: I0204 11:43:59.340635 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t"
Feb 04 11:43:59 crc kubenswrapper[4728]: I0204 11:43:59.368538 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" podStartSLOduration=41.368523269 podStartE2EDuration="41.368523269s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:43:59.366948631 +0000 UTC m=+988.509653006" watchObservedRunningTime="2026-02-04 11:43:59.368523269 +0000 UTC m=+988.511227644"
Feb 04 11:44:02 crc kubenswrapper[4728]: I0204 11:44:02.359735 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" event={"ID":"dc74eb23-85aa-4df4-8273-0af9a0a37dda","Type":"ContainerStarted","Data":"5f1afe3c8b38095c1be2ebbdb5a7543b1492fd3c41b2f8608b77e03779e600d8"}
Feb 04 11:44:02 crc kubenswrapper[4728]: I0204 11:44:02.360271 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q"
Feb 04 11:44:02 crc kubenswrapper[4728]: I0204 11:44:02.390189 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" podStartSLOduration=33.818221617 podStartE2EDuration="44.390164077s" podCreationTimestamp="2026-02-04 11:43:18 +0000 UTC" firstStartedPulling="2026-02-04 11:43:51.138026158 +0000 UTC m=+980.280730563" lastFinishedPulling="2026-02-04 11:44:01.709968608 +0000 UTC m=+990.852673023" observedRunningTime="2026-02-04 11:44:02.381129153 +0000 UTC m=+991.523833558" watchObservedRunningTime="2026-02-04 11:44:02.390164077 +0000 UTC m=+991.532868482"
Feb 04 11:44:05 crc kubenswrapper[4728]: I0204 11:44:05.448615 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:44:05 crc kubenswrapper[4728]: I0204 11:44:05.448971 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:44:05 crc kubenswrapper[4728]: I0204 11:44:05.449019 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 11:44:05 crc kubenswrapper[4728]: I0204 11:44:05.449623 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57f05c207a10ae4fedd99430e02b6ac5fa7f5bce4ce363fc2daec5fefcb4c117"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 11:44:05 crc kubenswrapper[4728]: I0204 11:44:05.449690 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://57f05c207a10ae4fedd99430e02b6ac5fa7f5bce4ce363fc2daec5fefcb4c117" gracePeriod=600
Feb 04 11:44:06 crc kubenswrapper[4728]: I0204 11:44:06.388942 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="57f05c207a10ae4fedd99430e02b6ac5fa7f5bce4ce363fc2daec5fefcb4c117" exitCode=0
Feb 04 11:44:06 crc kubenswrapper[4728]: I0204 11:44:06.389001 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"57f05c207a10ae4fedd99430e02b6ac5fa7f5bce4ce363fc2daec5fefcb4c117"}
Feb 04 11:44:06 crc kubenswrapper[4728]: I0204 11:44:06.389277 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"c9955d85c683603107a50c1f93858af2076e1b6307f2485d080e9953f839e1ba"}
Feb 04 11:44:06 crc kubenswrapper[4728]: I0204 11:44:06.389302 4728 scope.go:117] "RemoveContainer" containerID="d1141676880e32a0c8de5aba6aaf202ec56fa7791680367a5b1bd8fc7c075b2b"
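The machine-config-daemon sequence above is the standard liveness-failure path end to end: the prober gets connection refused on http://127.0.0.1:8798/health, the sync loop marks the probe unhealthy, the kubelet kills the container with its grace period (gracePeriod=600), and PLEG then reports ContainerDied followed by ContainerStarted for the replacement. A probe of the shape implied by the logged URL, written as a corev1 literal (the host, path, and port come from the log; the failure threshold is a guess, since the actual machine-config-daemon manifest is not part of this log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Reconstructed from the probe URL in the log; the real manifest may differ.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		// After this many consecutive failures the kubelet restarts the container.
		FailureThreshold: 3,
	}
	fmt.Printf("%+v\n", probe)
}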
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dl677q" Feb 04 11:44:11 crc kubenswrapper[4728]: I0204 11:44:11.173680 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-67db8bbf87-ffl8t" Feb 04 11:44:25 crc kubenswrapper[4728]: I0204 11:44:25.848792 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vmchb"] Feb 04 11:44:25 crc kubenswrapper[4728]: I0204 11:44:25.850512 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" Feb 04 11:44:25 crc kubenswrapper[4728]: I0204 11:44:25.853642 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-p4ln7" Feb 04 11:44:25 crc kubenswrapper[4728]: I0204 11:44:25.854008 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 04 11:44:25 crc kubenswrapper[4728]: I0204 11:44:25.854249 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 04 11:44:25 crc kubenswrapper[4728]: I0204 11:44:25.856871 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 04 11:44:25 crc kubenswrapper[4728]: I0204 11:44:25.859521 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vmchb"] Feb 04 11:44:25 crc kubenswrapper[4728]: I0204 11:44:25.907342 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-94r5b"] Feb 04 11:44:25 crc kubenswrapper[4728]: I0204 11:44:25.908372 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:25 crc kubenswrapper[4728]: I0204 11:44:25.910981 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 04 11:44:25 crc kubenswrapper[4728]: I0204 11:44:25.919584 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-94r5b"] Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.000825 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpmr9\" (UniqueName: \"kubernetes.io/projected/0c1dbf48-7435-45b7-81c5-b6e471e747b2-kube-api-access-wpmr9\") pod \"dnsmasq-dns-78dd6ddcc-94r5b\" (UID: \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.000890 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gxk9\" (UniqueName: \"kubernetes.io/projected/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-kube-api-access-4gxk9\") pod \"dnsmasq-dns-675f4bcbfc-vmchb\" (UID: \"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.000973 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-config\") pod \"dnsmasq-dns-78dd6ddcc-94r5b\" (UID: \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.001172 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-config\") pod \"dnsmasq-dns-675f4bcbfc-vmchb\" (UID: \"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.001220 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-94r5b\" (UID: \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.102211 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-config\") pod \"dnsmasq-dns-675f4bcbfc-vmchb\" (UID: \"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.102271 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-94r5b\" (UID: \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.102302 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpmr9\" (UniqueName: \"kubernetes.io/projected/0c1dbf48-7435-45b7-81c5-b6e471e747b2-kube-api-access-wpmr9\") pod \"dnsmasq-dns-78dd6ddcc-94r5b\" (UID: \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.102345 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gxk9\" (UniqueName: \"kubernetes.io/projected/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-kube-api-access-4gxk9\") pod \"dnsmasq-dns-675f4bcbfc-vmchb\" (UID: \"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.102393 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-config\") pod \"dnsmasq-dns-78dd6ddcc-94r5b\" (UID: \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.104066 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-config\") pod \"dnsmasq-dns-675f4bcbfc-vmchb\" (UID: \"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.104063 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-94r5b\" (UID: \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.104113 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-config\") pod \"dnsmasq-dns-78dd6ddcc-94r5b\" (UID: 
\"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.122678 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpmr9\" (UniqueName: \"kubernetes.io/projected/0c1dbf48-7435-45b7-81c5-b6e471e747b2-kube-api-access-wpmr9\") pod \"dnsmasq-dns-78dd6ddcc-94r5b\" (UID: \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.124485 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gxk9\" (UniqueName: \"kubernetes.io/projected/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-kube-api-access-4gxk9\") pod \"dnsmasq-dns-675f4bcbfc-vmchb\" (UID: \"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.166435 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.225095 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.580984 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vmchb"] Feb 04 11:44:26 crc kubenswrapper[4728]: I0204 11:44:26.670447 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-94r5b"] Feb 04 11:44:26 crc kubenswrapper[4728]: W0204 11:44:26.675521 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c1dbf48_7435_45b7_81c5_b6e471e747b2.slice/crio-65fdc9a367d6283e266b337ba71e1765c10783534d83ed9b4abab8e77e450714 WatchSource:0}: Error finding container 65fdc9a367d6283e266b337ba71e1765c10783534d83ed9b4abab8e77e450714: Status 404 returned error can't find the container with id 65fdc9a367d6283e266b337ba71e1765c10783534d83ed9b4abab8e77e450714 Feb 04 11:44:27 crc kubenswrapper[4728]: I0204 11:44:27.532109 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" event={"ID":"0c1dbf48-7435-45b7-81c5-b6e471e747b2","Type":"ContainerStarted","Data":"65fdc9a367d6283e266b337ba71e1765c10783534d83ed9b4abab8e77e450714"} Feb 04 11:44:27 crc kubenswrapper[4728]: I0204 11:44:27.535066 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" event={"ID":"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd","Type":"ContainerStarted","Data":"40d3a85d0273e877d39660ea1651fa00d4b5edb27aa37f1a113f404fa8e1338a"} Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.652357 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vmchb"] Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.670377 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4rnz8"] Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.671825 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.683598 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4rnz8"] Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.739149 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-config\") pod \"dnsmasq-dns-666b6646f7-4rnz8\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") " pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.739199 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sftck\" (UniqueName: \"kubernetes.io/projected/4112a7f1-40a4-4a17-890d-7f9c48ea547f-kube-api-access-sftck\") pod \"dnsmasq-dns-666b6646f7-4rnz8\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") " pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.739258 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4rnz8\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") " pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.841046 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-config\") pod \"dnsmasq-dns-666b6646f7-4rnz8\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") " pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.841395 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sftck\" (UniqueName: \"kubernetes.io/projected/4112a7f1-40a4-4a17-890d-7f9c48ea547f-kube-api-access-sftck\") pod \"dnsmasq-dns-666b6646f7-4rnz8\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") " pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.841585 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4rnz8\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") " pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.843002 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4rnz8\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") " pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.843117 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-config\") pod \"dnsmasq-dns-666b6646f7-4rnz8\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") " pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.863669 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sftck\" (UniqueName: 
\"kubernetes.io/projected/4112a7f1-40a4-4a17-890d-7f9c48ea547f-kube-api-access-sftck\") pod \"dnsmasq-dns-666b6646f7-4rnz8\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") " pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.925931 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-94r5b"] Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.956047 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wn8z"] Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.958092 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:28 crc kubenswrapper[4728]: I0204 11:44:28.963246 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wn8z"] Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.006480 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.044159 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sbls\" (UniqueName: \"kubernetes.io/projected/2f99686a-5920-4efd-8352-de765ac16f39-kube-api-access-5sbls\") pod \"dnsmasq-dns-57d769cc4f-5wn8z\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.044222 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5wn8z\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.044276 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-config\") pod \"dnsmasq-dns-57d769cc4f-5wn8z\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.145723 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-config\") pod \"dnsmasq-dns-57d769cc4f-5wn8z\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.145831 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sbls\" (UniqueName: \"kubernetes.io/projected/2f99686a-5920-4efd-8352-de765ac16f39-kube-api-access-5sbls\") pod \"dnsmasq-dns-57d769cc4f-5wn8z\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.145862 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5wn8z\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.146778 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5wn8z\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.147425 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-config\") pod \"dnsmasq-dns-57d769cc4f-5wn8z\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.172142 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sbls\" (UniqueName: \"kubernetes.io/projected/2f99686a-5920-4efd-8352-de765ac16f39-kube-api-access-5sbls\") pod \"dnsmasq-dns-57d769cc4f-5wn8z\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.279233 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.552455 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4rnz8"] Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.827601 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.828985 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.831778 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.832052 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.832095 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.832210 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.832060 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.832333 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wgqx5" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.836517 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.860124 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.960429 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.960473 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.960530 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.960546 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.960584 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xj26\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-kube-api-access-8xj26\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.960604 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.960697 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.960799 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.960839 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.960865 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:29 crc kubenswrapper[4728]: I0204 11:44:29.960965 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.062427 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.062494 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.062566 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xj26\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-kube-api-access-8xj26\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.062600 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.062623 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.062671 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.062695 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.062741 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.062805 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc 
Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.062859 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0"
Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.062890 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0"
Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.063348 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0"
Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.064196 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0"
Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.065897 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0"
Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.066748 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0"
Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.066801 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0"
Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.067202 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0"
Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.071083 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0"
Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.096660 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID:
\"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.097159 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.102791 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.109586 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xj26\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-kube-api-access-8xj26\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.144186 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.179180 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.180529 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.184505 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.184745 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.184925 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6xmcq" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.185094 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.185678 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.187356 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.187675 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.187711 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.187675 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.290500 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.290550 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23b1eaab-360d-4438-b68d-0d61f21ff593-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.290595 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.290625 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23b1eaab-360d-4438-b68d-0d61f21ff593-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.290649 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.290680 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.290997 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 
11:44:30.291182 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.291288 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.291481 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.291582 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hfmt\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-kube-api-access-4hfmt\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.393575 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.393632 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.393664 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hfmt\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-kube-api-access-4hfmt\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.393700 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.393721 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23b1eaab-360d-4438-b68d-0d61f21ff593-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 
11:44:30.393766 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.393784 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23b1eaab-360d-4438-b68d-0d61f21ff593-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.393821 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.393842 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.393867 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.393893 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.394972 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.395481 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.396718 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.396768 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.396818 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.397431 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.399406 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.399869 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23b1eaab-360d-4438-b68d-0d61f21ff593-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.404630 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.406765 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23b1eaab-360d-4438-b68d-0d61f21ff593-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.414534 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hfmt\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-kube-api-access-4hfmt\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.419198 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.511433 4728 util.go:30] "No sandbox for pod can be found. 
Feb 04 11:44:30 crc kubenswrapper[4728]: I0204 11:44:30.511433 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.279325 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.281601 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.283279 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.285529 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-f72bd"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.285982 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.286481 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.289399 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.297452 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.407216 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da6f384a-b651-4e8c-b17b-355d35b4e5a8-kolla-config\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.407257 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/da6f384a-b651-4e8c-b17b-355d35b4e5a8-config-data-default\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.407286 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6f384a-b651-4e8c-b17b-355d35b4e5a8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.407455 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txj4m\" (UniqueName: \"kubernetes.io/projected/da6f384a-b651-4e8c-b17b-355d35b4e5a8-kube-api-access-txj4m\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.407588 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da6f384a-b651-4e8c-b17b-355d35b4e5a8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0"
Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.407700 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\"
(UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.407796 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6f384a-b651-4e8c-b17b-355d35b4e5a8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.407847 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/da6f384a-b651-4e8c-b17b-355d35b4e5a8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.508862 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da6f384a-b651-4e8c-b17b-355d35b4e5a8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.508929 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.508970 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6f384a-b651-4e8c-b17b-355d35b4e5a8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.509005 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/da6f384a-b651-4e8c-b17b-355d35b4e5a8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.509036 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da6f384a-b651-4e8c-b17b-355d35b4e5a8-kolla-config\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.509056 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/da6f384a-b651-4e8c-b17b-355d35b4e5a8-config-data-default\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.509091 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6f384a-b651-4e8c-b17b-355d35b4e5a8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " 
pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.509127 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txj4m\" (UniqueName: \"kubernetes.io/projected/da6f384a-b651-4e8c-b17b-355d35b4e5a8-kube-api-access-txj4m\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.510992 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.511439 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/da6f384a-b651-4e8c-b17b-355d35b4e5a8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.513077 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.513398 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.521774 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da6f384a-b651-4e8c-b17b-355d35b4e5a8-kolla-config\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.522305 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/da6f384a-b651-4e8c-b17b-355d35b4e5a8-config-data-default\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.522963 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da6f384a-b651-4e8c-b17b-355d35b4e5a8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.523590 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.526208 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.528951 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.532940 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txj4m\" (UniqueName: \"kubernetes.io/projected/da6f384a-b651-4e8c-b17b-355d35b4e5a8-kube-api-access-txj4m\") pod 
\"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.535504 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6f384a-b651-4e8c-b17b-355d35b4e5a8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.535576 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6f384a-b651-4e8c-b17b-355d35b4e5a8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"da6f384a-b651-4e8c-b17b-355d35b4e5a8\") " pod="openstack/openstack-galera-0" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.616576 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-f72bd" Feb 04 11:44:31 crc kubenswrapper[4728]: I0204 11:44:31.620894 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: W0204 11:44:32.655867 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4112a7f1_40a4_4a17_890d_7f9c48ea547f.slice/crio-983d3867e36adeb78e25c6b0e3fd8ca74fcef6af9704c7c75c91ad87ce01cf30 WatchSource:0}: Error finding container 983d3867e36adeb78e25c6b0e3fd8ca74fcef6af9704c7c75c91ad87ce01cf30: Status 404 returned error can't find the container with id 983d3867e36adeb78e25c6b0e3fd8ca74fcef6af9704c7c75c91ad87ce01cf30 Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.660868 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.694395 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.695586 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.698002 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.698237 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.698390 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.698541 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xv6xt" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.716502 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.830881 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6e91a91-91b5-4617-9ba2-16e77e144334-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.830936 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a6e91a91-91b5-4617-9ba2-16e77e144334-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.830955 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7sm6\" (UniqueName: \"kubernetes.io/projected/a6e91a91-91b5-4617-9ba2-16e77e144334-kube-api-access-t7sm6\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.831004 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6e91a91-91b5-4617-9ba2-16e77e144334-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.831026 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6e91a91-91b5-4617-9ba2-16e77e144334-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.831045 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e91a91-91b5-4617-9ba2-16e77e144334-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.831077 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/a6e91a91-91b5-4617-9ba2-16e77e144334-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.831107 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.933105 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a6e91a91-91b5-4617-9ba2-16e77e144334-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.933489 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.933520 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6e91a91-91b5-4617-9ba2-16e77e144334-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.933562 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a6e91a91-91b5-4617-9ba2-16e77e144334-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.933591 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7sm6\" (UniqueName: \"kubernetes.io/projected/a6e91a91-91b5-4617-9ba2-16e77e144334-kube-api-access-t7sm6\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.933653 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6e91a91-91b5-4617-9ba2-16e77e144334-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.933690 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6e91a91-91b5-4617-9ba2-16e77e144334-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.933713 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e91a91-91b5-4617-9ba2-16e77e144334-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.934300 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a6e91a91-91b5-4617-9ba2-16e77e144334-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.934593 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a6e91a91-91b5-4617-9ba2-16e77e144334-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.935504 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6e91a91-91b5-4617-9ba2-16e77e144334-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.936487 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6e91a91-91b5-4617-9ba2-16e77e144334-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.934835 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.941172 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e91a91-91b5-4617-9ba2-16e77e144334-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.953531 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7sm6\" (UniqueName: \"kubernetes.io/projected/a6e91a91-91b5-4617-9ba2-16e77e144334-kube-api-access-t7sm6\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.968650 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6e91a91-91b5-4617-9ba2-16e77e144334-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:32 crc kubenswrapper[4728]: I0204 11:44:32.971562 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a6e91a91-91b5-4617-9ba2-16e77e144334\") " pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:33 crc 
kubenswrapper[4728]: I0204 11:44:33.005622 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.006482 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.008434 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.011074 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-52kdn" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.011139 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.025123 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.026222 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.136844 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbdvl\" (UniqueName: \"kubernetes.io/projected/f6e14837-5f91-48dd-ab9c-8fad208e9d88-kube-api-access-vbdvl\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.136897 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6e14837-5f91-48dd-ab9c-8fad208e9d88-config-data\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.136930 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e14837-5f91-48dd-ab9c-8fad208e9d88-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.136962 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f6e14837-5f91-48dd-ab9c-8fad208e9d88-kolla-config\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.137022 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e14837-5f91-48dd-ab9c-8fad208e9d88-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.238669 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e14837-5f91-48dd-ab9c-8fad208e9d88-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.238730 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbdvl\" 
(UniqueName: \"kubernetes.io/projected/f6e14837-5f91-48dd-ab9c-8fad208e9d88-kube-api-access-vbdvl\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.238773 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6e14837-5f91-48dd-ab9c-8fad208e9d88-config-data\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.238799 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e14837-5f91-48dd-ab9c-8fad208e9d88-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.238827 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f6e14837-5f91-48dd-ab9c-8fad208e9d88-kolla-config\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.239411 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f6e14837-5f91-48dd-ab9c-8fad208e9d88-kolla-config\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.240353 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6e14837-5f91-48dd-ab9c-8fad208e9d88-config-data\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.242279 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e14837-5f91-48dd-ab9c-8fad208e9d88-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.243133 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e14837-5f91-48dd-ab9c-8fad208e9d88-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.261626 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbdvl\" (UniqueName: \"kubernetes.io/projected/f6e14837-5f91-48dd-ab9c-8fad208e9d88-kube-api-access-vbdvl\") pod \"memcached-0\" (UID: \"f6e14837-5f91-48dd-ab9c-8fad208e9d88\") " pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.335238 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 04 11:44:33 crc kubenswrapper[4728]: I0204 11:44:33.601979 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" event={"ID":"4112a7f1-40a4-4a17-890d-7f9c48ea547f","Type":"ContainerStarted","Data":"983d3867e36adeb78e25c6b0e3fd8ca74fcef6af9704c7c75c91ad87ce01cf30"} Feb 04 11:44:34 crc kubenswrapper[4728]: I0204 11:44:34.956941 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 04 11:44:34 crc kubenswrapper[4728]: I0204 11:44:34.958079 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 04 11:44:34 crc kubenswrapper[4728]: I0204 11:44:34.959795 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-ngcbk" Feb 04 11:44:34 crc kubenswrapper[4728]: I0204 11:44:34.972612 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 04 11:44:35 crc kubenswrapper[4728]: I0204 11:44:35.069423 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-586p4\" (UniqueName: \"kubernetes.io/projected/f36f4b27-e48a-40a7-9179-9ad5146a1ce7-kube-api-access-586p4\") pod \"kube-state-metrics-0\" (UID: \"f36f4b27-e48a-40a7-9179-9ad5146a1ce7\") " pod="openstack/kube-state-metrics-0" Feb 04 11:44:35 crc kubenswrapper[4728]: I0204 11:44:35.170494 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-586p4\" (UniqueName: \"kubernetes.io/projected/f36f4b27-e48a-40a7-9179-9ad5146a1ce7-kube-api-access-586p4\") pod \"kube-state-metrics-0\" (UID: \"f36f4b27-e48a-40a7-9179-9ad5146a1ce7\") " pod="openstack/kube-state-metrics-0" Feb 04 11:44:35 crc kubenswrapper[4728]: I0204 11:44:35.189603 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-586p4\" (UniqueName: \"kubernetes.io/projected/f36f4b27-e48a-40a7-9179-9ad5146a1ce7-kube-api-access-586p4\") pod \"kube-state-metrics-0\" (UID: \"f36f4b27-e48a-40a7-9179-9ad5146a1ce7\") " pod="openstack/kube-state-metrics-0" Feb 04 11:44:35 crc kubenswrapper[4728]: I0204 11:44:35.283820 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.301561 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q2pd5"] Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.303764 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q2pd5" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.308517 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9dfbw" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.311401 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-mf6rw"] Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.312498 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.312720 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.313311 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-mf6rw" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.322068 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q2pd5"] Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.340400 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mf6rw"] Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.424490 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-var-run\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.424773 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7c1adf-4c02-42b4-997d-291a7d033983-ovn-controller-tls-certs\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.424867 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-etc-ovs\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.424953 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvjr\" (UniqueName: \"kubernetes.io/projected/faf427b7-1198-4ca8-9873-dc531a2bc572-kube-api-access-9nvjr\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.425045 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6c7c1adf-4c02-42b4-997d-291a7d033983-var-run\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.425132 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-var-lib\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.425239 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-var-log\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.425336 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c7c1adf-4c02-42b4-997d-291a7d033983-scripts\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.425419 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7c1adf-4c02-42b4-997d-291a7d033983-combined-ca-bundle\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.425498 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6c7c1adf-4c02-42b4-997d-291a7d033983-var-log-ovn\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.425580 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c7c1adf-4c02-42b4-997d-291a7d033983-var-run-ovn\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.425717 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgmdh\" (UniqueName: \"kubernetes.io/projected/6c7c1adf-4c02-42b4-997d-291a7d033983-kube-api-access-cgmdh\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.425776 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/faf427b7-1198-4ca8-9873-dc531a2bc572-scripts\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527313 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgmdh\" (UniqueName: \"kubernetes.io/projected/6c7c1adf-4c02-42b4-997d-291a7d033983-kube-api-access-cgmdh\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527366 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/faf427b7-1198-4ca8-9873-dc531a2bc572-scripts\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527442 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-var-run\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527479 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7c1adf-4c02-42b4-997d-291a7d033983-ovn-controller-tls-certs\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527505 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-etc-ovs\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527527 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvjr\" (UniqueName: \"kubernetes.io/projected/faf427b7-1198-4ca8-9873-dc531a2bc572-kube-api-access-9nvjr\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527556 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6c7c1adf-4c02-42b4-997d-291a7d033983-var-run\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527582 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-var-lib\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527605 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-var-log\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527637 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c7c1adf-4c02-42b4-997d-291a7d033983-scripts\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527657 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7c1adf-4c02-42b4-997d-291a7d033983-combined-ca-bundle\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527682 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6c7c1adf-4c02-42b4-997d-291a7d033983-var-log-ovn\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.527713 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c7c1adf-4c02-42b4-997d-291a7d033983-var-run-ovn\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.528283 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-var-log\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.528344 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c7c1adf-4c02-42b4-997d-291a7d033983-var-run-ovn\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.528441 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6c7c1adf-4c02-42b4-997d-291a7d033983-var-log-ovn\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.528452 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-var-lib\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.528740 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-var-run\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.528745 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6c7c1adf-4c02-42b4-997d-291a7d033983-var-run\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.528794 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/faf427b7-1198-4ca8-9873-dc531a2bc572-etc-ovs\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.531100 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c7c1adf-4c02-42b4-997d-291a7d033983-scripts\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.531262 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/faf427b7-1198-4ca8-9873-dc531a2bc572-scripts\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.538722 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7c1adf-4c02-42b4-997d-291a7d033983-combined-ca-bundle\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.544011 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7c1adf-4c02-42b4-997d-291a7d033983-ovn-controller-tls-certs\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5"
\"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.547673 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgmdh\" (UniqueName: \"kubernetes.io/projected/6c7c1adf-4c02-42b4-997d-291a7d033983-kube-api-access-cgmdh\") pod \"ovn-controller-q2pd5\" (UID: \"6c7c1adf-4c02-42b4-997d-291a7d033983\") " pod="openstack/ovn-controller-q2pd5" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.551541 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvjr\" (UniqueName: \"kubernetes.io/projected/faf427b7-1198-4ca8-9873-dc531a2bc572-kube-api-access-9nvjr\") pod \"ovn-controller-ovs-mf6rw\" (UID: \"faf427b7-1198-4ca8-9873-dc531a2bc572\") " pod="openstack/ovn-controller-ovs-mf6rw" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.627466 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q2pd5" Feb 04 11:44:38 crc kubenswrapper[4728]: I0204 11:44:38.634847 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mf6rw" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.189895 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.191278 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.194117 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-f5pd8" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.194307 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.194448 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.194781 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.195182 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.196715 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.340876 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.341020 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np7kt\" (UniqueName: \"kubernetes.io/projected/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-kube-api-access-np7kt\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.341045 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.341097 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.341217 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.341263 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.341300 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-config\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.341320 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.443093 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np7kt\" (UniqueName: \"kubernetes.io/projected/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-kube-api-access-np7kt\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.443151 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.443197 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.443270 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " 
pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.443308 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.443353 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-config\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.443378 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.443398 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.443639 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.443701 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.444657 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-config\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.445463 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.449574 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.451629 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" 
(UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.458240 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.461944 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np7kt\" (UniqueName: \"kubernetes.io/projected/1f594403-9f70-4fa5-81ef-3b0e5f5d98e4-kube-api-access-np7kt\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.466148 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4\") " pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:39 crc kubenswrapper[4728]: I0204 11:44:39.562252 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 04 11:44:41 crc kubenswrapper[4728]: E0204 11:44:41.135823 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 04 11:44:41 crc kubenswrapper[4728]: E0204 11:44:41.138224 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpmr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-94r5b_openstack(0c1dbf48-7435-45b7-81c5-b6e471e747b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 11:44:41 crc kubenswrapper[4728]: E0204 11:44:41.139468 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" podUID="0c1dbf48-7435-45b7-81c5-b6e471e747b2" Feb 04 11:44:41 crc kubenswrapper[4728]: I0204 11:44:41.540811 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 04 11:44:41 crc kubenswrapper[4728]: W0204 11:44:41.558877 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e14837_5f91_48dd_ab9c_8fad208e9d88.slice/crio-56938f93e5d6c473f4ef39faa64ee9bde4e3685ebadfdf7e3207dc22d026567c WatchSource:0}: Error finding container 56938f93e5d6c473f4ef39faa64ee9bde4e3685ebadfdf7e3207dc22d026567c: Status 404 returned error can't find the container with id 56938f93e5d6c473f4ef39faa64ee9bde4e3685ebadfdf7e3207dc22d026567c Feb 04 11:44:41 crc kubenswrapper[4728]: I0204 11:44:41.678470 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" event={"ID":"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd","Type":"ContainerStarted","Data":"c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820"} Feb 04 11:44:41 crc kubenswrapper[4728]: I0204 11:44:41.678514 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" podUID="77030b61-c5d6-4ea3-86b2-63d02c9d5ffd" containerName="init" 
containerID="cri-o://c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820" gracePeriod=10 Feb 04 11:44:41 crc kubenswrapper[4728]: I0204 11:44:41.681778 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f6e14837-5f91-48dd-ab9c-8fad208e9d88","Type":"ContainerStarted","Data":"56938f93e5d6c473f4ef39faa64ee9bde4e3685ebadfdf7e3207dc22d026567c"} Feb 04 11:44:41 crc kubenswrapper[4728]: I0204 11:44:41.885976 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wn8z"] Feb 04 11:44:41 crc kubenswrapper[4728]: W0204 11:44:41.910378 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23b1eaab_360d_4438_b68d_0d61f21ff593.slice/crio-2e5b77a4dbe2535e67cfab0a20d67cc9d3079e56b0f1b6921f667985aefb2f7d WatchSource:0}: Error finding container 2e5b77a4dbe2535e67cfab0a20d67cc9d3079e56b0f1b6921f667985aefb2f7d: Status 404 returned error can't find the container with id 2e5b77a4dbe2535e67cfab0a20d67cc9d3079e56b0f1b6921f667985aefb2f7d Feb 04 11:44:41 crc kubenswrapper[4728]: I0204 11:44:41.910708 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 11:44:41 crc kubenswrapper[4728]: I0204 11:44:41.923365 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 04 11:44:41 crc kubenswrapper[4728]: W0204 11:44:41.927669 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf36f4b27_e48a_40a7_9179_9ad5146a1ce7.slice/crio-27440b069b1eee629c292e8e2e4898de898aaff72f4e65dd94a08b9195dcdca8 WatchSource:0}: Error finding container 27440b069b1eee629c292e8e2e4898de898aaff72f4e65dd94a08b9195dcdca8: Status 404 returned error can't find the container with id 27440b069b1eee629c292e8e2e4898de898aaff72f4e65dd94a08b9195dcdca8 Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.002558 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 04 11:44:42 crc kubenswrapper[4728]: W0204 11:44:42.007092 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f594403_9f70_4fa5_81ef_3b0e5f5d98e4.slice/crio-a8971fd9423600a156393b449f899acd0ebf38e9a6c8a7ea847972647700e0bd WatchSource:0}: Error finding container a8971fd9423600a156393b449f899acd0ebf38e9a6c8a7ea847972647700e0bd: Status 404 returned error can't find the container with id a8971fd9423600a156393b449f899acd0ebf38e9a6c8a7ea847972647700e0bd Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.088547 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.094561 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.166088 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.202038 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.218157 4728 util.go:48] "No ready sandbox for pod can be found. 
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.262259 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mf6rw"]
Feb 04 11:44:42 crc kubenswrapper[4728]: W0204 11:44:42.270841 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf427b7_1198_4ca8_9873_dc531a2bc572.slice/crio-870f3de6c0180b35c1cda11ae8fbee39cea37b709f535a4b2417e3729b96022d WatchSource:0}: Error finding container 870f3de6c0180b35c1cda11ae8fbee39cea37b709f535a4b2417e3729b96022d: Status 404 returned error can't find the container with id 870f3de6c0180b35c1cda11ae8fbee39cea37b709f535a4b2417e3729b96022d
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.292152 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-config\") pod \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\" (UID: \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") "
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.292218 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpmr9\" (UniqueName: \"kubernetes.io/projected/0c1dbf48-7435-45b7-81c5-b6e471e747b2-kube-api-access-wpmr9\") pod \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\" (UID: \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") "
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.292265 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gxk9\" (UniqueName: \"kubernetes.io/projected/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-kube-api-access-4gxk9\") pod \"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd\" (UID: \"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd\") "
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.292365 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-config\") pod \"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd\" (UID: \"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd\") "
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.292412 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-dns-svc\") pod \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\" (UID: \"0c1dbf48-7435-45b7-81c5-b6e471e747b2\") "
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.292909 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-config" (OuterVolumeSpecName: "config") pod "0c1dbf48-7435-45b7-81c5-b6e471e747b2" (UID: "0c1dbf48-7435-45b7-81c5-b6e471e747b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.292971 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c1dbf48-7435-45b7-81c5-b6e471e747b2" (UID: "0c1dbf48-7435-45b7-81c5-b6e471e747b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.297311 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-kube-api-access-4gxk9" (OuterVolumeSpecName: "kube-api-access-4gxk9") pod "77030b61-c5d6-4ea3-86b2-63d02c9d5ffd" (UID: "77030b61-c5d6-4ea3-86b2-63d02c9d5ffd"). InnerVolumeSpecName "kube-api-access-4gxk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.298175 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c1dbf48-7435-45b7-81c5-b6e471e747b2-kube-api-access-wpmr9" (OuterVolumeSpecName: "kube-api-access-wpmr9") pod "0c1dbf48-7435-45b7-81c5-b6e471e747b2" (UID: "0c1dbf48-7435-45b7-81c5-b6e471e747b2"). InnerVolumeSpecName "kube-api-access-wpmr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.311712 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-config" (OuterVolumeSpecName: "config") pod "77030b61-c5d6-4ea3-86b2-63d02c9d5ffd" (UID: "77030b61-c5d6-4ea3-86b2-63d02c9d5ffd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.349288 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q2pd5"]
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.394168 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-config\") on node \"crc\" DevicePath \"\""
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.394694 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.394705 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1dbf48-7435-45b7-81c5-b6e471e747b2-config\") on node \"crc\" DevicePath \"\""
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.394719 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpmr9\" (UniqueName: \"kubernetes.io/projected/0c1dbf48-7435-45b7-81c5-b6e471e747b2-kube-api-access-wpmr9\") on node \"crc\" DevicePath \"\""
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.394728 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gxk9\" (UniqueName: \"kubernetes.io/projected/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd-kube-api-access-4gxk9\") on node \"crc\" DevicePath \"\""
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.630812 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 04 11:44:42 crc kubenswrapper[4728]: E0204 11:44:42.631150 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77030b61-c5d6-4ea3-86b2-63d02c9d5ffd" containerName="init"
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.631174 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="77030b61-c5d6-4ea3-86b2-63d02c9d5ffd" containerName="init"
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.631433 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="77030b61-c5d6-4ea3-86b2-63d02c9d5ffd" containerName="init"
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.632409 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.637378 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.637990 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.638202 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.638481 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8b94g"
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.645314 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.728648 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b"
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.728972 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-94r5b" event={"ID":"0c1dbf48-7435-45b7-81c5-b6e471e747b2","Type":"ContainerDied","Data":"65fdc9a367d6283e266b337ba71e1765c10783534d83ed9b4abab8e77e450714"}
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.730910 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23b1eaab-360d-4438-b68d-0d61f21ff593","Type":"ContainerStarted","Data":"2e5b77a4dbe2535e67cfab0a20d67cc9d3079e56b0f1b6921f667985aefb2f7d"}
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.734219 4728 generic.go:334] "Generic (PLEG): container finished" podID="4112a7f1-40a4-4a17-890d-7f9c48ea547f" containerID="33f393d1273d7e9ad3b84c7336d1417e9a4e8fad756f35bfdbf7e0c12bb05037" exitCode=0
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.734256 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" event={"ID":"4112a7f1-40a4-4a17-890d-7f9c48ea547f","Type":"ContainerDied","Data":"33f393d1273d7e9ad3b84c7336d1417e9a4e8fad756f35bfdbf7e0c12bb05037"}
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.737513 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"da6f384a-b651-4e8c-b17b-355d35b4e5a8","Type":"ContainerStarted","Data":"49e715331c31908029581d79825a7665b84aa7c06bd2a14648392f30fb0e2530"}
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.745286 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f36f4b27-e48a-40a7-9179-9ad5146a1ce7","Type":"ContainerStarted","Data":"27440b069b1eee629c292e8e2e4898de898aaff72f4e65dd94a08b9195dcdca8"}
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.747277 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a6e91a91-91b5-4617-9ba2-16e77e144334","Type":"ContainerStarted","Data":"cb991e78adee40e8076d512a27a94194be3970abc2e1d4e3961436236db7fad0"}
Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.757294 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c","Type":"ContainerStarted","Data":"ed8402716efb5f5e69156139a9189410ff61978a431618b31a5180568a99a271"}
event={"ID":"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c","Type":"ContainerStarted","Data":"ed8402716efb5f5e69156139a9189410ff61978a431618b31a5180568a99a271"} Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.760846 4728 generic.go:334] "Generic (PLEG): container finished" podID="2f99686a-5920-4efd-8352-de765ac16f39" containerID="d0d64249ec06aa7f07f5788c4c9ee59d78a6080b6995531ad87eae26445ef124" exitCode=0 Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.760925 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" event={"ID":"2f99686a-5920-4efd-8352-de765ac16f39","Type":"ContainerDied","Data":"d0d64249ec06aa7f07f5788c4c9ee59d78a6080b6995531ad87eae26445ef124"} Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.760955 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" event={"ID":"2f99686a-5920-4efd-8352-de765ac16f39","Type":"ContainerStarted","Data":"12ad8e6adcdd70daa259982e38672423258f55a9af21cf32a819168c8cf7d907"} Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.762777 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mf6rw" event={"ID":"faf427b7-1198-4ca8-9873-dc531a2bc572","Type":"ContainerStarted","Data":"870f3de6c0180b35c1cda11ae8fbee39cea37b709f535a4b2417e3729b96022d"} Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.765703 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4","Type":"ContainerStarted","Data":"a8971fd9423600a156393b449f899acd0ebf38e9a6c8a7ea847972647700e0bd"} Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.767244 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q2pd5" event={"ID":"6c7c1adf-4c02-42b4-997d-291a7d033983","Type":"ContainerStarted","Data":"6aa6c7a9094eb33508bdcdb75187d23415233cfaa16fdb7991ee03af2f274011"} Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.770567 4728 generic.go:334] "Generic (PLEG): container finished" podID="77030b61-c5d6-4ea3-86b2-63d02c9d5ffd" containerID="c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820" exitCode=0 Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.770608 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" event={"ID":"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd","Type":"ContainerDied","Data":"c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820"} Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.770633 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" event={"ID":"77030b61-c5d6-4ea3-86b2-63d02c9d5ffd","Type":"ContainerDied","Data":"40d3a85d0273e877d39660ea1651fa00d4b5edb27aa37f1a113f404fa8e1338a"} Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.770652 4728 scope.go:117] "RemoveContainer" containerID="c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.770799 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vmchb" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.801593 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fe072a1-7563-4a3a-b52f-6dacc6771099-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.801635 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fe072a1-7563-4a3a-b52f-6dacc6771099-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.801660 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddcpx\" (UniqueName: \"kubernetes.io/projected/2fe072a1-7563-4a3a-b52f-6dacc6771099-kube-api-access-ddcpx\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.801691 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fe072a1-7563-4a3a-b52f-6dacc6771099-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.801729 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2fe072a1-7563-4a3a-b52f-6dacc6771099-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.801762 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe072a1-7563-4a3a-b52f-6dacc6771099-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.801839 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fe072a1-7563-4a3a-b52f-6dacc6771099-config\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.801860 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.904941 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-94r5b"] Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.905963 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddcpx\" (UniqueName: 
\"kubernetes.io/projected/2fe072a1-7563-4a3a-b52f-6dacc6771099-kube-api-access-ddcpx\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.906024 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fe072a1-7563-4a3a-b52f-6dacc6771099-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.906064 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2fe072a1-7563-4a3a-b52f-6dacc6771099-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.906088 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe072a1-7563-4a3a-b52f-6dacc6771099-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.906169 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fe072a1-7563-4a3a-b52f-6dacc6771099-config\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.906190 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.906216 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fe072a1-7563-4a3a-b52f-6dacc6771099-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.906233 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fe072a1-7563-4a3a-b52f-6dacc6771099-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.908355 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fe072a1-7563-4a3a-b52f-6dacc6771099-config\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.908603 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.909377 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2fe072a1-7563-4a3a-b52f-6dacc6771099-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.910254 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fe072a1-7563-4a3a-b52f-6dacc6771099-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.910809 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-94r5b"] Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.917683 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe072a1-7563-4a3a-b52f-6dacc6771099-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.920162 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fe072a1-7563-4a3a-b52f-6dacc6771099-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.934330 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fe072a1-7563-4a3a-b52f-6dacc6771099-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.954363 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddcpx\" (UniqueName: \"kubernetes.io/projected/2fe072a1-7563-4a3a-b52f-6dacc6771099-kube-api-access-ddcpx\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.980476 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vmchb"] Feb 04 11:44:42 crc kubenswrapper[4728]: I0204 11:44:42.994642 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2fe072a1-7563-4a3a-b52f-6dacc6771099\") " pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.012114 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vmchb"] Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.024478 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.563610 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c1dbf48-7435-45b7-81c5-b6e471e747b2" path="/var/lib/kubelet/pods/0c1dbf48-7435-45b7-81c5-b6e471e747b2/volumes" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.564285 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77030b61-c5d6-4ea3-86b2-63d02c9d5ffd" path="/var/lib/kubelet/pods/77030b61-c5d6-4ea3-86b2-63d02c9d5ffd/volumes" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.677995 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-znkl4"] Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.680229 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.683367 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.684664 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-znkl4"] Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.816587 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wn8z"] Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.824558 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00063022-f33f-4668-a588-e2b677acfda1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.824608 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00063022-f33f-4668-a588-e2b677acfda1-config\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.824650 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/00063022-f33f-4668-a588-e2b677acfda1-ovn-rundir\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.824670 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq4kr\" (UniqueName: \"kubernetes.io/projected/00063022-f33f-4668-a588-e2b677acfda1-kube-api-access-gq4kr\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.824713 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/00063022-f33f-4668-a588-e2b677acfda1-ovs-rundir\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.824743 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00063022-f33f-4668-a588-e2b677acfda1-combined-ca-bundle\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.838121 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bms9x"] Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.846257 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.851744 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.860890 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bms9x"] Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.927374 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.927537 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00063022-f33f-4668-a588-e2b677acfda1-combined-ca-bundle\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.927641 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x655w\" (UniqueName: \"kubernetes.io/projected/355651da-79e8-4420-addf-f27c8ec3e9e7-kube-api-access-x655w\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.927793 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.927870 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00063022-f33f-4668-a588-e2b677acfda1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.927922 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00063022-f33f-4668-a588-e2b677acfda1-config\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.927996 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/host-path/00063022-f33f-4668-a588-e2b677acfda1-ovn-rundir\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.928024 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq4kr\" (UniqueName: \"kubernetes.io/projected/00063022-f33f-4668-a588-e2b677acfda1-kube-api-access-gq4kr\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.928105 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-config\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.929168 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/00063022-f33f-4668-a588-e2b677acfda1-ovs-rundir\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.929297 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/00063022-f33f-4668-a588-e2b677acfda1-ovs-rundir\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.928568 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/00063022-f33f-4668-a588-e2b677acfda1-ovn-rundir\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.929507 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00063022-f33f-4668-a588-e2b677acfda1-config\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.946795 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00063022-f33f-4668-a588-e2b677acfda1-combined-ca-bundle\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.950009 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq4kr\" (UniqueName: \"kubernetes.io/projected/00063022-f33f-4668-a588-e2b677acfda1-kube-api-access-gq4kr\") pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:43 crc kubenswrapper[4728]: I0204 11:44:43.953204 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00063022-f33f-4668-a588-e2b677acfda1-metrics-certs-tls-certs\") 
pod \"ovn-controller-metrics-znkl4\" (UID: \"00063022-f33f-4668-a588-e2b677acfda1\") " pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.027677 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-znkl4" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.029813 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-config\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.029847 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.029882 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x655w\" (UniqueName: \"kubernetes.io/projected/355651da-79e8-4420-addf-f27c8ec3e9e7-kube-api-access-x655w\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.029922 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.030633 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.030792 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.030881 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-config\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.053193 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x655w\" (UniqueName: \"kubernetes.io/projected/355651da-79e8-4420-addf-f27c8ec3e9e7-kube-api-access-x655w\") pod \"dnsmasq-dns-7fd796d7df-bms9x\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.173149 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.583266 4728 scope.go:117] "RemoveContainer" containerID="c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820" Feb 04 11:44:44 crc kubenswrapper[4728]: E0204 11:44:44.584261 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820\": container with ID starting with c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820 not found: ID does not exist" containerID="c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820" Feb 04 11:44:44 crc kubenswrapper[4728]: I0204 11:44:44.584290 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820"} err="failed to get container status \"c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820\": rpc error: code = NotFound desc = could not find container \"c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820\": container with ID starting with c64900a4ac6c3a62cf872d98ca9dea3df391a74a96a11cf642b4b1d736803820 not found: ID does not exist" Feb 04 11:44:49 crc kubenswrapper[4728]: I0204 11:44:49.330195 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bms9x"] Feb 04 11:44:49 crc kubenswrapper[4728]: I0204 11:44:49.815430 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-znkl4"] Feb 04 11:44:49 crc kubenswrapper[4728]: I0204 11:44:49.828717 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" event={"ID":"355651da-79e8-4420-addf-f27c8ec3e9e7","Type":"ContainerStarted","Data":"a6e9e6a78c9c8f3ad7224d148127aab728a0930bb54bae2dbd7b46ea9c44c356"} Feb 04 11:44:49 crc kubenswrapper[4728]: W0204 11:44:49.942126 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00063022_f33f_4668_a588_e2b677acfda1.slice/crio-43ad4ad6830dd885694a7d078af865874752bc6853f0345dbb31da13ad3b62a9 WatchSource:0}: Error finding container 43ad4ad6830dd885694a7d078af865874752bc6853f0345dbb31da13ad3b62a9: Status 404 returned error can't find the container with id 43ad4ad6830dd885694a7d078af865874752bc6853f0345dbb31da13ad3b62a9 Feb 04 11:44:49 crc kubenswrapper[4728]: E0204 11:44:49.944100 4728 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 04 11:44:49 crc kubenswrapper[4728]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/4112a7f1-40a4-4a17-890d-7f9c48ea547f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 04 11:44:49 crc kubenswrapper[4728]: > podSandboxID="983d3867e36adeb78e25c6b0e3fd8ca74fcef6af9704c7c75c91ad87ce01cf30" Feb 04 11:44:49 crc kubenswrapper[4728]: E0204 11:44:49.944260 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 04 11:44:49 crc kubenswrapper[4728]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed 
--no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sftck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-4rnz8_openstack(4112a7f1-40a4-4a17-890d-7f9c48ea547f): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/4112a7f1-40a4-4a17-890d-7f9c48ea547f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 04 11:44:49 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 04 11:44:49 crc kubenswrapper[4728]: E0204 11:44:49.946835 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/4112a7f1-40a4-4a17-890d-7f9c48ea547f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" podUID="4112a7f1-40a4-4a17-890d-7f9c48ea547f" Feb 04 11:44:49 crc kubenswrapper[4728]: I0204 11:44:49.992123 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 04 11:44:50 crc kubenswrapper[4728]: W0204 11:44:50.020853 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe072a1_7563_4a3a_b52f_6dacc6771099.slice/crio-ca0dca86e3299ee3d7c7f9378045ed61d425a522311dfdc985d7b8b84f338dce WatchSource:0}: 
Error finding container ca0dca86e3299ee3d7c7f9378045ed61d425a522311dfdc985d7b8b84f338dce: Status 404 returned error can't find the container with id ca0dca86e3299ee3d7c7f9378045ed61d425a522311dfdc985d7b8b84f338dce Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.840016 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q2pd5" event={"ID":"6c7c1adf-4c02-42b4-997d-291a7d033983","Type":"ContainerStarted","Data":"0c792cecb6487a1667cd2e67a740253d7d64c452d99fb784b2da62d0c6c01618"} Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.841196 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-q2pd5" Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.850424 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2fe072a1-7563-4a3a-b52f-6dacc6771099","Type":"ContainerStarted","Data":"ca0dca86e3299ee3d7c7f9378045ed61d425a522311dfdc985d7b8b84f338dce"} Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.860332 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q2pd5" podStartSLOduration=5.651638337 podStartE2EDuration="12.860314988s" podCreationTimestamp="2026-02-04 11:44:38 +0000 UTC" firstStartedPulling="2026-02-04 11:44:42.36157697 +0000 UTC m=+1031.504281365" lastFinishedPulling="2026-02-04 11:44:49.570253631 +0000 UTC m=+1038.712958016" observedRunningTime="2026-02-04 11:44:50.855041232 +0000 UTC m=+1039.997745627" watchObservedRunningTime="2026-02-04 11:44:50.860314988 +0000 UTC m=+1040.003019373" Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.860725 4728 generic.go:334] "Generic (PLEG): container finished" podID="355651da-79e8-4420-addf-f27c8ec3e9e7" containerID="e830bbb7bccc597ae4fa0552d5e3a3839ce40fcd064e55a0b70c8ded169e9213" exitCode=0 Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.861495 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" event={"ID":"355651da-79e8-4420-addf-f27c8ec3e9e7","Type":"ContainerDied","Data":"e830bbb7bccc597ae4fa0552d5e3a3839ce40fcd064e55a0b70c8ded169e9213"} Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.864379 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"da6f384a-b651-4e8c-b17b-355d35b4e5a8","Type":"ContainerStarted","Data":"3b7a2bea7aa20c90029a71bcaac3d9381dd5839dab098f61217699d3a2c6c1d8"} Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.866493 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4","Type":"ContainerStarted","Data":"5e7f124ef080f050e4b6ff1bf15c6d751187b51fca1db519f6dd66067a9b336a"} Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.868389 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a6e91a91-91b5-4617-9ba2-16e77e144334","Type":"ContainerStarted","Data":"0a261c9410ee5267f7268486c6c03cf27c5016a2ed709dda1a9b67838d5e1705"} Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.870449 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f6e14837-5f91-48dd-ab9c-8fad208e9d88","Type":"ContainerStarted","Data":"ded30e29acb0e3c05f48145f5207f56016260e0362481a563b6356ad29a8609e"} Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.870654 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/memcached-0" Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.872794 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" podUID="2f99686a-5920-4efd-8352-de765ac16f39" containerName="dnsmasq-dns" containerID="cri-o://980663ccdd7930fa193adfce3c1eb8cb9b7201b5bf3fc81a42a9a6575b1331e6" gracePeriod=10 Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.872997 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" event={"ID":"2f99686a-5920-4efd-8352-de765ac16f39","Type":"ContainerStarted","Data":"980663ccdd7930fa193adfce3c1eb8cb9b7201b5bf3fc81a42a9a6575b1331e6"} Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.873032 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.880956 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-znkl4" event={"ID":"00063022-f33f-4668-a588-e2b677acfda1","Type":"ContainerStarted","Data":"43ad4ad6830dd885694a7d078af865874752bc6853f0345dbb31da13ad3b62a9"} Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.901799 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mf6rw" event={"ID":"faf427b7-1198-4ca8-9873-dc531a2bc572","Type":"ContainerStarted","Data":"40975ea58f184b5a044b73a63b66eb201ee6c4a4c36ed59e83667ab210bdafc5"} Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.925054 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.272286119 podStartE2EDuration="18.925028716s" podCreationTimestamp="2026-02-04 11:44:32 +0000 UTC" firstStartedPulling="2026-02-04 11:44:41.571558701 +0000 UTC m=+1030.714263086" lastFinishedPulling="2026-02-04 11:44:49.224301298 +0000 UTC m=+1038.367005683" observedRunningTime="2026-02-04 11:44:50.924271728 +0000 UTC m=+1040.066976113" watchObservedRunningTime="2026-02-04 11:44:50.925028716 +0000 UTC m=+1040.067733091" Feb 04 11:44:50 crc kubenswrapper[4728]: I0204 11:44:50.948321 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" podStartSLOduration=22.948298959 podStartE2EDuration="22.948298959s" podCreationTimestamp="2026-02-04 11:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:44:50.941913197 +0000 UTC m=+1040.084617582" watchObservedRunningTime="2026-02-04 11:44:50.948298959 +0000 UTC m=+1040.091003344" Feb 04 11:44:51 crc kubenswrapper[4728]: I0204 11:44:51.907468 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23b1eaab-360d-4438-b68d-0d61f21ff593","Type":"ContainerStarted","Data":"ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895"} Feb 04 11:44:51 crc kubenswrapper[4728]: I0204 11:44:51.910085 4728 generic.go:334] "Generic (PLEG): container finished" podID="faf427b7-1198-4ca8-9873-dc531a2bc572" containerID="40975ea58f184b5a044b73a63b66eb201ee6c4a4c36ed59e83667ab210bdafc5" exitCode=0 Feb 04 11:44:51 crc kubenswrapper[4728]: I0204 11:44:51.910140 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mf6rw" 
event={"ID":"faf427b7-1198-4ca8-9873-dc531a2bc572","Type":"ContainerDied","Data":"40975ea58f184b5a044b73a63b66eb201ee6c4a4c36ed59e83667ab210bdafc5"} Feb 04 11:44:51 crc kubenswrapper[4728]: I0204 11:44:51.915409 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c","Type":"ContainerStarted","Data":"e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9"} Feb 04 11:44:51 crc kubenswrapper[4728]: I0204 11:44:51.918274 4728 generic.go:334] "Generic (PLEG): container finished" podID="2f99686a-5920-4efd-8352-de765ac16f39" containerID="980663ccdd7930fa193adfce3c1eb8cb9b7201b5bf3fc81a42a9a6575b1331e6" exitCode=0 Feb 04 11:44:51 crc kubenswrapper[4728]: I0204 11:44:51.918411 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" event={"ID":"2f99686a-5920-4efd-8352-de765ac16f39","Type":"ContainerDied","Data":"980663ccdd7930fa193adfce3c1eb8cb9b7201b5bf3fc81a42a9a6575b1331e6"} Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.363233 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.485597 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-dns-svc\") pod \"2f99686a-5920-4efd-8352-de765ac16f39\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.485699 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-config\") pod \"2f99686a-5920-4efd-8352-de765ac16f39\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.485734 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sbls\" (UniqueName: \"kubernetes.io/projected/2f99686a-5920-4efd-8352-de765ac16f39-kube-api-access-5sbls\") pod \"2f99686a-5920-4efd-8352-de765ac16f39\" (UID: \"2f99686a-5920-4efd-8352-de765ac16f39\") " Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.491302 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f99686a-5920-4efd-8352-de765ac16f39-kube-api-access-5sbls" (OuterVolumeSpecName: "kube-api-access-5sbls") pod "2f99686a-5920-4efd-8352-de765ac16f39" (UID: "2f99686a-5920-4efd-8352-de765ac16f39"). InnerVolumeSpecName "kube-api-access-5sbls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.527630 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f99686a-5920-4efd-8352-de765ac16f39" (UID: "2f99686a-5920-4efd-8352-de765ac16f39"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.534564 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-config" (OuterVolumeSpecName: "config") pod "2f99686a-5920-4efd-8352-de765ac16f39" (UID: "2f99686a-5920-4efd-8352-de765ac16f39"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.588190 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.588222 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f99686a-5920-4efd-8352-de765ac16f39-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.588232 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sbls\" (UniqueName: \"kubernetes.io/projected/2f99686a-5920-4efd-8352-de765ac16f39-kube-api-access-5sbls\") on node \"crc\" DevicePath \"\"" Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.928875 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" event={"ID":"2f99686a-5920-4efd-8352-de765ac16f39","Type":"ContainerDied","Data":"12ad8e6adcdd70daa259982e38672423258f55a9af21cf32a819168c8cf7d907"} Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.928930 4728 scope.go:117] "RemoveContainer" containerID="980663ccdd7930fa193adfce3c1eb8cb9b7201b5bf3fc81a42a9a6575b1331e6" Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.929059 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wn8z" Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.964262 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wn8z"] Feb 04 11:44:52 crc kubenswrapper[4728]: I0204 11:44:52.971106 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wn8z"] Feb 04 11:44:53 crc kubenswrapper[4728]: I0204 11:44:53.565291 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f99686a-5920-4efd-8352-de765ac16f39" path="/var/lib/kubelet/pods/2f99686a-5920-4efd-8352-de765ac16f39/volumes" Feb 04 11:44:55 crc kubenswrapper[4728]: I0204 11:44:55.953856 4728 generic.go:334] "Generic (PLEG): container finished" podID="a6e91a91-91b5-4617-9ba2-16e77e144334" containerID="0a261c9410ee5267f7268486c6c03cf27c5016a2ed709dda1a9b67838d5e1705" exitCode=0 Feb 04 11:44:55 crc kubenswrapper[4728]: I0204 11:44:55.953914 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a6e91a91-91b5-4617-9ba2-16e77e144334","Type":"ContainerDied","Data":"0a261c9410ee5267f7268486c6c03cf27c5016a2ed709dda1a9b67838d5e1705"} Feb 04 11:44:56 crc kubenswrapper[4728]: I0204 11:44:56.964414 4728 generic.go:334] "Generic (PLEG): container finished" podID="da6f384a-b651-4e8c-b17b-355d35b4e5a8" containerID="3b7a2bea7aa20c90029a71bcaac3d9381dd5839dab098f61217699d3a2c6c1d8" exitCode=0 Feb 04 11:44:56 crc kubenswrapper[4728]: I0204 11:44:56.964468 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"da6f384a-b651-4e8c-b17b-355d35b4e5a8","Type":"ContainerDied","Data":"3b7a2bea7aa20c90029a71bcaac3d9381dd5839dab098f61217699d3a2c6c1d8"} Feb 04 11:44:58 crc kubenswrapper[4728]: I0204 11:44:58.337375 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 04 11:44:58 crc kubenswrapper[4728]: I0204 11:44:58.736263 4728 scope.go:117] "RemoveContainer" 
containerID="d0d64249ec06aa7f07f5788c4c9ee59d78a6080b6995531ad87eae26445ef124" Feb 04 11:44:59 crc kubenswrapper[4728]: I0204 11:44:59.993845 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" event={"ID":"355651da-79e8-4420-addf-f27c8ec3e9e7","Type":"ContainerStarted","Data":"4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e"} Feb 04 11:44:59 crc kubenswrapper[4728]: I0204 11:44:59.995058 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:44:59 crc kubenswrapper[4728]: I0204 11:44:59.998059 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"da6f384a-b651-4e8c-b17b-355d35b4e5a8","Type":"ContainerStarted","Data":"a26943bb5251151c5e8426d3e71bc1fde293223c4e0c78a5d40c171c3a35ce60"} Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.002507 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a6e91a91-91b5-4617-9ba2-16e77e144334","Type":"ContainerStarted","Data":"1d9ff5e88a37fbbf83846dcd44eb5b4956a34cbb8f3ab872c6881589dd7e34d5"} Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.007451 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2fe072a1-7563-4a3a-b52f-6dacc6771099","Type":"ContainerStarted","Data":"de64ac2bb43017bff976d6d7b9b2184d4fd7e7c0d727b92512d3ffffc00587be"} Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.007494 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2fe072a1-7563-4a3a-b52f-6dacc6771099","Type":"ContainerStarted","Data":"aec8e3f0a6e1b0aad44ed48cf49de51e0ea9de2b3ed6540a59047b334de804cf"} Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.010270 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-znkl4" event={"ID":"00063022-f33f-4668-a588-e2b677acfda1","Type":"ContainerStarted","Data":"38dee34cc11b8601bed1fc89e3d9f33674b265e7f83c391761bc1f6483325e27"} Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.012872 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" event={"ID":"4112a7f1-40a4-4a17-890d-7f9c48ea547f","Type":"ContainerStarted","Data":"57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec"} Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.013395 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.018899 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mf6rw" event={"ID":"faf427b7-1198-4ca8-9873-dc531a2bc572","Type":"ContainerStarted","Data":"0a632618b40d94a1f5f77ae234360b35bc848cb278bde37aac0960d578d50174"} Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.018971 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mf6rw" event={"ID":"faf427b7-1198-4ca8-9873-dc531a2bc572","Type":"ContainerStarted","Data":"9852913b438e2c7453e3dacbbc4b5979dcaf3db0eaee5598a2a3064f662123e3"} Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.019470 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mf6rw" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.019828 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ovs-mf6rw" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.024363 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" podStartSLOduration=17.024336242 podStartE2EDuration="17.024336242s" podCreationTimestamp="2026-02-04 11:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:45:00.019490237 +0000 UTC m=+1049.162194632" watchObservedRunningTime="2026-02-04 11:45:00.024336242 +0000 UTC m=+1049.167040677" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.025368 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1f594403-9f70-4fa5-81ef-3b0e5f5d98e4","Type":"ContainerStarted","Data":"4364fbf623f768ee50f7cf35aafc44f48e64ebfd16c932fcb5e39846a9c86da1"} Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.027215 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f36f4b27-e48a-40a7-9179-9ad5146a1ce7","Type":"ContainerStarted","Data":"63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631"} Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.027373 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.040772 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" podStartSLOduration=23.430768841 podStartE2EDuration="32.040736672s" podCreationTimestamp="2026-02-04 11:44:28 +0000 UTC" firstStartedPulling="2026-02-04 11:44:32.660634845 +0000 UTC m=+1021.803339220" lastFinishedPulling="2026-02-04 11:44:41.270602666 +0000 UTC m=+1030.413307051" observedRunningTime="2026-02-04 11:45:00.036928781 +0000 UTC m=+1049.179633186" watchObservedRunningTime="2026-02-04 11:45:00.040736672 +0000 UTC m=+1049.183441057" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.065270 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-znkl4" podStartSLOduration=8.193444267 podStartE2EDuration="17.065242784s" podCreationTimestamp="2026-02-04 11:44:43 +0000 UTC" firstStartedPulling="2026-02-04 11:44:49.944114429 +0000 UTC m=+1039.086818834" lastFinishedPulling="2026-02-04 11:44:58.815912956 +0000 UTC m=+1047.958617351" observedRunningTime="2026-02-04 11:45:00.054077919 +0000 UTC m=+1049.196782344" watchObservedRunningTime="2026-02-04 11:45:00.065242784 +0000 UTC m=+1049.207947189" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.079231 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.261495145 podStartE2EDuration="19.079204606s" podCreationTimestamp="2026-02-04 11:44:41 +0000 UTC" firstStartedPulling="2026-02-04 11:44:50.038878822 +0000 UTC m=+1039.181583207" lastFinishedPulling="2026-02-04 11:44:58.856588283 +0000 UTC m=+1047.999292668" observedRunningTime="2026-02-04 11:45:00.074991245 +0000 UTC m=+1049.217695640" watchObservedRunningTime="2026-02-04 11:45:00.079204606 +0000 UTC m=+1049.221909031" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.128960 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-mf6rw" podStartSLOduration=14.93671439 podStartE2EDuration="22.128937128s" podCreationTimestamp="2026-02-04 11:44:38 
+0000 UTC" firstStartedPulling="2026-02-04 11:44:42.273896907 +0000 UTC m=+1031.416601292" lastFinishedPulling="2026-02-04 11:44:49.466119645 +0000 UTC m=+1038.608824030" observedRunningTime="2026-02-04 11:45:00.12058963 +0000 UTC m=+1049.263294025" watchObservedRunningTime="2026-02-04 11:45:00.128937128 +0000 UTC m=+1049.271641513" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.156040 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv"] Feb 04 11:45:00 crc kubenswrapper[4728]: E0204 11:45:00.156414 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f99686a-5920-4efd-8352-de765ac16f39" containerName="init" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.156432 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f99686a-5920-4efd-8352-de765ac16f39" containerName="init" Feb 04 11:45:00 crc kubenswrapper[4728]: E0204 11:45:00.156457 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f99686a-5920-4efd-8352-de765ac16f39" containerName="dnsmasq-dns" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.156465 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f99686a-5920-4efd-8352-de765ac16f39" containerName="dnsmasq-dns" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.156630 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f99686a-5920-4efd-8352-de765ac16f39" containerName="dnsmasq-dns" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.157256 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.157418 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.988846188 podStartE2EDuration="29.157393545s" podCreationTimestamp="2026-02-04 11:44:31 +0000 UTC" firstStartedPulling="2026-02-04 11:44:42.099734456 +0000 UTC m=+1031.242438841" lastFinishedPulling="2026-02-04 11:44:49.268281783 +0000 UTC m=+1038.410986198" observedRunningTime="2026-02-04 11:45:00.148586525 +0000 UTC m=+1049.291290910" watchObservedRunningTime="2026-02-04 11:45:00.157393545 +0000 UTC m=+1049.300097940" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.158953 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.159385 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.178607 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv"] Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.184876 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.902524306 podStartE2EDuration="30.184855148s" podCreationTimestamp="2026-02-04 11:44:30 +0000 UTC" firstStartedPulling="2026-02-04 11:44:42.184407699 +0000 UTC m=+1031.327112074" lastFinishedPulling="2026-02-04 11:44:49.466738531 +0000 UTC m=+1038.609442916" observedRunningTime="2026-02-04 11:45:00.176582801 +0000 UTC m=+1049.319287186" watchObservedRunningTime="2026-02-04 11:45:00.184855148 +0000 UTC m=+1049.327559543" Feb 04 
11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.205971 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.308083601 podStartE2EDuration="26.205953979s" podCreationTimestamp="2026-02-04 11:44:34 +0000 UTC" firstStartedPulling="2026-02-04 11:44:41.929389097 +0000 UTC m=+1031.072093482" lastFinishedPulling="2026-02-04 11:44:58.827259465 +0000 UTC m=+1047.969963860" observedRunningTime="2026-02-04 11:45:00.204888584 +0000 UTC m=+1049.347592969" watchObservedRunningTime="2026-02-04 11:45:00.205953979 +0000 UTC m=+1049.348658364" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.225953 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.420057712 podStartE2EDuration="22.225931094s" podCreationTimestamp="2026-02-04 11:44:38 +0000 UTC" firstStartedPulling="2026-02-04 11:44:42.009952752 +0000 UTC m=+1031.152657127" lastFinishedPulling="2026-02-04 11:44:58.815826134 +0000 UTC m=+1047.958530509" observedRunningTime="2026-02-04 11:45:00.222713628 +0000 UTC m=+1049.365418013" watchObservedRunningTime="2026-02-04 11:45:00.225931094 +0000 UTC m=+1049.368635479" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.315332 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf7aba07-22c7-451d-840c-92f8899544cd-secret-volume\") pod \"collect-profiles-29503425-5w4qv\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.315401 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hccql\" (UniqueName: \"kubernetes.io/projected/bf7aba07-22c7-451d-840c-92f8899544cd-kube-api-access-hccql\") pod \"collect-profiles-29503425-5w4qv\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.315729 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7aba07-22c7-451d-840c-92f8899544cd-config-volume\") pod \"collect-profiles-29503425-5w4qv\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.416739 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7aba07-22c7-451d-840c-92f8899544cd-config-volume\") pod \"collect-profiles-29503425-5w4qv\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.416841 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf7aba07-22c7-451d-840c-92f8899544cd-secret-volume\") pod \"collect-profiles-29503425-5w4qv\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.416882 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hccql\" (UniqueName: \"kubernetes.io/projected/bf7aba07-22c7-451d-840c-92f8899544cd-kube-api-access-hccql\") pod \"collect-profiles-29503425-5w4qv\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.417548 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7aba07-22c7-451d-840c-92f8899544cd-config-volume\") pod \"collect-profiles-29503425-5w4qv\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.422226 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf7aba07-22c7-451d-840c-92f8899544cd-secret-volume\") pod \"collect-profiles-29503425-5w4qv\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.451500 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hccql\" (UniqueName: \"kubernetes.io/projected/bf7aba07-22c7-451d-840c-92f8899544cd-kube-api-access-hccql\") pod \"collect-profiles-29503425-5w4qv\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.464233 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4rnz8"] Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.475640 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.509903 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g64bg"] Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.511502 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.515838 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.537402 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g64bg"] Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.563439 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.620426 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.620481 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.620508 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zttcb\" (UniqueName: \"kubernetes.io/projected/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-kube-api-access-zttcb\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.620532 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.620609 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-config\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.631822 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.725073 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-config\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.725222 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: 
I0204 11:45:00.725261 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.725303 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zttcb\" (UniqueName: \"kubernetes.io/projected/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-kube-api-access-zttcb\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.725332 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.726249 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-config\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.726442 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.726577 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.727156 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.745887 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zttcb\" (UniqueName: \"kubernetes.io/projected/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-kube-api-access-zttcb\") pod \"dnsmasq-dns-86db49b7ff-g64bg\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") " pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.871663 4728 util.go:30] "No sandbox for pod can be found. 
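
[Editor's note] The reconciler_common.go:245/:218 and operation_generator.go:637 sequences above are the kubelet volume manager reconciling the new dnsmasq-dns-86db49b7ff-g64bg pod: each volume in the desired state of the world is verified as attached, then mounted, and the success is recorded in the actual state; later in the log the inverse path (UnmountVolume / Volume detached) runs when a pod is deleted. A heavily simplified Go sketch of that desired-vs-actual loop; the types and map layout are illustrative, not the kubelet's real API:

package main

import "fmt"

// volume identifies a pod volume the way the log does: by its UniqueName.
type volume struct {
	uniqueName string // e.g. "kubernetes.io/configmap/<podUID>-config"
	pod        string
}

// reconciler is an illustrative reduction of the kubelet volume manager:
// desired holds what should be mounted, actual what currently is.
type reconciler struct {
	desired map[string]volume
	actual  map[string]volume
}

func (r *reconciler) reconcile() {
	// Mount everything desired but not yet in the actual state.
	for name, v := range r.desired {
		if _, mounted := r.actual[name]; mounted {
			continue
		}
		fmt.Printf("VerifyControllerAttachedVolume started for %q\n", name)
		fmt.Printf("MountVolume started for %q pod %q\n", name, v.pod)
		// The real kubelet dispatches to an operation executor here; success
		// is what the "MountVolume.SetUp succeeded" lines record.
		r.actual[name] = v
	}
	// Unmount everything mounted but no longer desired (the
	// "UnmountVolume started" / "Volume detached" lines later in the log).
	for name := range r.actual {
		if _, wanted := r.desired[name]; !wanted {
			fmt.Printf("UnmountVolume started for %q\n", name)
			delete(r.actual, name)
		}
	}
}

func main() {
	cfg := volume{"kubernetes.io/configmap/6aa9cce2-config", "openstack/dnsmasq-dns-86db49b7ff-g64bg"}
	r := &reconciler{desired: map[string]volume{cfg.uniqueName: cfg}, actual: map[string]volume{}}
	r.reconcile()                       // mounts config
	r.desired = map[string]volume{}     // pod deleted
	r.reconcile()                       // unmounts config
}
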
Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.871663 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg"
Feb 04 11:45:00 crc kubenswrapper[4728]: I0204 11:45:00.986573 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv"]
Feb 04 11:45:01 crc kubenswrapper[4728]: I0204 11:45:01.025528 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 04 11:45:01 crc kubenswrapper[4728]: I0204 11:45:01.043213 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" event={"ID":"bf7aba07-22c7-451d-840c-92f8899544cd","Type":"ContainerStarted","Data":"2c177935ebc59242dc66acbabe21508027359e19228859c419bc0af61bc450c1"}
Feb 04 11:45:01 crc kubenswrapper[4728]: I0204 11:45:01.045026 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 04 11:45:01 crc kubenswrapper[4728]: I0204 11:45:01.097229 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 04 11:45:01 crc kubenswrapper[4728]: I0204 11:45:01.316514 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g64bg"]
Feb 04 11:45:01 crc kubenswrapper[4728]: W0204 11:45:01.320324 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6aa9cce2_7b79_4637_b5bf_9e126d7ec603.slice/crio-216db3d323f4926c5d06bdaeca842254658e7aae30b1b7cb2dbc30dbdee5ad40 WatchSource:0}: Error finding container 216db3d323f4926c5d06bdaeca842254658e7aae30b1b7cb2dbc30dbdee5ad40: Status 404 returned error can't find the container with id 216db3d323f4926c5d06bdaeca842254658e7aae30b1b7cb2dbc30dbdee5ad40
Feb 04 11:45:01 crc kubenswrapper[4728]: I0204 11:45:01.621683 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 04 11:45:01 crc kubenswrapper[4728]: I0204 11:45:01.621996 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.050808 4728 generic.go:334] "Generic (PLEG): container finished" podID="6aa9cce2-7b79-4637-b5bf-9e126d7ec603" containerID="81bc037ba8ca343055616b8539055270d205d0942ddaf154671b915ba560f7b6" exitCode=0
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.050921 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" event={"ID":"6aa9cce2-7b79-4637-b5bf-9e126d7ec603","Type":"ContainerDied","Data":"81bc037ba8ca343055616b8539055270d205d0942ddaf154671b915ba560f7b6"}
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.051185 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" event={"ID":"6aa9cce2-7b79-4637-b5bf-9e126d7ec603","Type":"ContainerStarted","Data":"216db3d323f4926c5d06bdaeca842254658e7aae30b1b7cb2dbc30dbdee5ad40"}
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.052942 4728 generic.go:334] "Generic (PLEG): container finished" podID="bf7aba07-22c7-451d-840c-92f8899544cd" containerID="25519d7fdf75943f48aef36004d7b1c9897a1b9149d175fc9c157cbfdbd9887c" exitCode=0
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.053767 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" event={"ID":"bf7aba07-22c7-451d-840c-92f8899544cd","Type":"ContainerDied","Data":"25519d7fdf75943f48aef36004d7b1c9897a1b9149d175fc9c157cbfdbd9887c"}
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.054593 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" podUID="4112a7f1-40a4-4a17-890d-7f9c48ea547f" containerName="dnsmasq-dns" containerID="cri-o://57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec" gracePeriod=10
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.564334 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8"
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.656253 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sftck\" (UniqueName: \"kubernetes.io/projected/4112a7f1-40a4-4a17-890d-7f9c48ea547f-kube-api-access-sftck\") pod \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") "
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.656318 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-config\") pod \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") "
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.656504 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-dns-svc\") pod \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\" (UID: \"4112a7f1-40a4-4a17-890d-7f9c48ea547f\") "
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.661479 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4112a7f1-40a4-4a17-890d-7f9c48ea547f-kube-api-access-sftck" (OuterVolumeSpecName: "kube-api-access-sftck") pod "4112a7f1-40a4-4a17-890d-7f9c48ea547f" (UID: "4112a7f1-40a4-4a17-890d-7f9c48ea547f"). InnerVolumeSpecName "kube-api-access-sftck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.694666 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-config" (OuterVolumeSpecName: "config") pod "4112a7f1-40a4-4a17-890d-7f9c48ea547f" (UID: "4112a7f1-40a4-4a17-890d-7f9c48ea547f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.700245 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4112a7f1-40a4-4a17-890d-7f9c48ea547f" (UID: "4112a7f1-40a4-4a17-890d-7f9c48ea547f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.757789 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sftck\" (UniqueName: \"kubernetes.io/projected/4112a7f1-40a4-4a17-890d-7f9c48ea547f-kube-api-access-sftck\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.757822 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-config\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:02 crc kubenswrapper[4728]: I0204 11:45:02.757832 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4112a7f1-40a4-4a17-890d-7f9c48ea547f-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.025148 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.027410 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.027447 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.060376 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" event={"ID":"6aa9cce2-7b79-4637-b5bf-9e126d7ec603","Type":"ContainerStarted","Data":"0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a"}
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.060530 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.062925 4728 generic.go:334] "Generic (PLEG): container finished" podID="4112a7f1-40a4-4a17-890d-7f9c48ea547f" containerID="57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec" exitCode=0
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.063013 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" event={"ID":"4112a7f1-40a4-4a17-890d-7f9c48ea547f","Type":"ContainerDied","Data":"57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec"}
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.063044 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8" event={"ID":"4112a7f1-40a4-4a17-890d-7f9c48ea547f","Type":"ContainerDied","Data":"983d3867e36adeb78e25c6b0e3fd8ca74fcef6af9704c7c75c91ad87ce01cf30"}
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.063057 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4rnz8"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.063062 4728 scope.go:117] "RemoveContainer" containerID="57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.088740 4728 scope.go:117] "RemoveContainer" containerID="33f393d1273d7e9ad3b84c7336d1417e9a4e8fad756f35bfdbf7e0c12bb05037"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.090379 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" podStartSLOduration=3.090364256 podStartE2EDuration="3.090364256s" podCreationTimestamp="2026-02-04 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:45:03.084864045 +0000 UTC m=+1052.227568440" watchObservedRunningTime="2026-02-04 11:45:03.090364256 +0000 UTC m=+1052.233068641"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.106794 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4rnz8"]
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.111502 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4rnz8"]
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.120981 4728 scope.go:117] "RemoveContainer" containerID="57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec"
Feb 04 11:45:03 crc kubenswrapper[4728]: E0204 11:45:03.122780 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec\": container with ID starting with 57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec not found: ID does not exist" containerID="57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.122825 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec"} err="failed to get container status \"57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec\": rpc error: code = NotFound desc = could not find container \"57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec\": container with ID starting with 57d8382e864c90a834597eefdbc607c72b86d57b5541eebadbf44315e0c540ec not found: ID does not exist"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.122852 4728 scope.go:117] "RemoveContainer" containerID="33f393d1273d7e9ad3b84c7336d1417e9a4e8fad756f35bfdbf7e0c12bb05037"
Feb 04 11:45:03 crc kubenswrapper[4728]: E0204 11:45:03.123073 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f393d1273d7e9ad3b84c7336d1417e9a4e8fad756f35bfdbf7e0c12bb05037\": container with ID starting with 33f393d1273d7e9ad3b84c7336d1417e9a4e8fad756f35bfdbf7e0c12bb05037 not found: ID does not exist" containerID="33f393d1273d7e9ad3b84c7336d1417e9a4e8fad756f35bfdbf7e0c12bb05037"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.123102 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f393d1273d7e9ad3b84c7336d1417e9a4e8fad756f35bfdbf7e0c12bb05037"} err="failed to get container status \"33f393d1273d7e9ad3b84c7336d1417e9a4e8fad756f35bfdbf7e0c12bb05037\": rpc error: code = NotFound desc = could not find container \"33f393d1273d7e9ad3b84c7336d1417e9a4e8fad756f35bfdbf7e0c12bb05037\": container with ID starting with 33f393d1273d7e9ad3b84c7336d1417e9a4e8fad756f35bfdbf7e0c12bb05037 not found: ID does not exist"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.380954 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.469438 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hccql\" (UniqueName: \"kubernetes.io/projected/bf7aba07-22c7-451d-840c-92f8899544cd-kube-api-access-hccql\") pod \"bf7aba07-22c7-451d-840c-92f8899544cd\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") "
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.469531 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7aba07-22c7-451d-840c-92f8899544cd-config-volume\") pod \"bf7aba07-22c7-451d-840c-92f8899544cd\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") "
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.469613 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf7aba07-22c7-451d-840c-92f8899544cd-secret-volume\") pod \"bf7aba07-22c7-451d-840c-92f8899544cd\" (UID: \"bf7aba07-22c7-451d-840c-92f8899544cd\") "
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.470340 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7aba07-22c7-451d-840c-92f8899544cd-config-volume" (OuterVolumeSpecName: "config-volume") pod "bf7aba07-22c7-451d-840c-92f8899544cd" (UID: "bf7aba07-22c7-451d-840c-92f8899544cd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.474138 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7aba07-22c7-451d-840c-92f8899544cd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bf7aba07-22c7-451d-840c-92f8899544cd" (UID: "bf7aba07-22c7-451d-840c-92f8899544cd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.474252 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7aba07-22c7-451d-840c-92f8899544cd-kube-api-access-hccql" (OuterVolumeSpecName: "kube-api-access-hccql") pod "bf7aba07-22c7-451d-840c-92f8899544cd" (UID: "bf7aba07-22c7-451d-840c-92f8899544cd"). InnerVolumeSpecName "kube-api-access-hccql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.562191 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4112a7f1-40a4-4a17-890d-7f9c48ea547f" path="/var/lib/kubelet/pods/4112a7f1-40a4-4a17-890d-7f9c48ea547f/volumes"
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.571230 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hccql\" (UniqueName: \"kubernetes.io/projected/bf7aba07-22c7-451d-840c-92f8899544cd-kube-api-access-hccql\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.571440 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7aba07-22c7-451d-840c-92f8899544cd-config-volume\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:03 crc kubenswrapper[4728]: I0204 11:45:03.571577 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf7aba07-22c7-451d-840c-92f8899544cd-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.061589 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.071406 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv" event={"ID":"bf7aba07-22c7-451d-840c-92f8899544cd","Type":"ContainerDied","Data":"2c177935ebc59242dc66acbabe21508027359e19228859c419bc0af61bc450c1"}
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.071441 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c177935ebc59242dc66acbabe21508027359e19228859c419bc0af61bc450c1"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.071464 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.096409 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.111574 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.185975 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.223463 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.410675 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 04 11:45:04 crc kubenswrapper[4728]: E0204 11:45:04.411063 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7aba07-22c7-451d-840c-92f8899544cd" containerName="collect-profiles"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.411090 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7aba07-22c7-451d-840c-92f8899544cd" containerName="collect-profiles"
Feb 04 11:45:04 crc kubenswrapper[4728]: E0204 11:45:04.411120 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4112a7f1-40a4-4a17-890d-7f9c48ea547f" containerName="dnsmasq-dns"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.411129 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4112a7f1-40a4-4a17-890d-7f9c48ea547f" containerName="dnsmasq-dns"
Feb 04 11:45:04 crc kubenswrapper[4728]: E0204 11:45:04.411141 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4112a7f1-40a4-4a17-890d-7f9c48ea547f" containerName="init"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.411149 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4112a7f1-40a4-4a17-890d-7f9c48ea547f" containerName="init"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.411342 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4112a7f1-40a4-4a17-890d-7f9c48ea547f" containerName="dnsmasq-dns"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.411363 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7aba07-22c7-451d-840c-92f8899544cd" containerName="collect-profiles"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.412344 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.417600 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.417910 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-z6pqk"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.417915 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.418094 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.429307 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.491707 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2fbe12-58b0-4438-a274-68040a4ec197-config\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.492091 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksc2n\" (UniqueName: \"kubernetes.io/projected/7e2fbe12-58b0-4438-a274-68040a4ec197-kube-api-access-ksc2n\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.492251 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2fbe12-58b0-4438-a274-68040a4ec197-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.492348 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e2fbe12-58b0-4438-a274-68040a4ec197-scripts\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.492443 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e2fbe12-58b0-4438-a274-68040a4ec197-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.492553 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2fbe12-58b0-4438-a274-68040a4ec197-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.492638 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2fbe12-58b0-4438-a274-68040a4ec197-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.594365 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksc2n\" (UniqueName: \"kubernetes.io/projected/7e2fbe12-58b0-4438-a274-68040a4ec197-kube-api-access-ksc2n\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.594694 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2fbe12-58b0-4438-a274-68040a4ec197-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.594872 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e2fbe12-58b0-4438-a274-68040a4ec197-scripts\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.595003 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e2fbe12-58b0-4438-a274-68040a4ec197-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.595508 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e2fbe12-58b0-4438-a274-68040a4ec197-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.595720 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2fbe12-58b0-4438-a274-68040a4ec197-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.596338 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2fbe12-58b0-4438-a274-68040a4ec197-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.596497 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2fbe12-58b0-4438-a274-68040a4ec197-config\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.595882 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e2fbe12-58b0-4438-a274-68040a4ec197-scripts\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.597561 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2fbe12-58b0-4438-a274-68040a4ec197-config\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.600249 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2fbe12-58b0-4438-a274-68040a4ec197-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.600313 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2fbe12-58b0-4438-a274-68040a4ec197-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.608782 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2fbe12-58b0-4438-a274-68040a4ec197-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.611350 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksc2n\" (UniqueName: \"kubernetes.io/projected/7e2fbe12-58b0-4438-a274-68040a4ec197-kube-api-access-ksc2n\") pod \"ovn-northd-0\" (UID: \"7e2fbe12-58b0-4438-a274-68040a4ec197\") " pod="openstack/ovn-northd-0"
Feb 04 11:45:04 crc kubenswrapper[4728]: I0204 11:45:04.727828 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.171730 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.254557 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g64bg"]
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.254826 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" podUID="6aa9cce2-7b79-4637-b5bf-9e126d7ec603" containerName="dnsmasq-dns" containerID="cri-o://0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a" gracePeriod=10
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.287281 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-hvgvk"]
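
[Editor's note] The cpu_manager/state_mem/memory_manager "RemoveStaleState" bursts earlier in this stretch run when a new pod (here ovn-northd-0) is admitted: any CPU or memory assignment still recorded for a (podUID, containerName) pair that no longer exists is dropped before the new pod's containers get assignments. A toy Go version of that sweep, assuming a flat assignment map; the real managers keep richer, NUMA-aware state:

package staleness

// key identifies resource-manager state the way the log lines do.
type key struct {
	PodUID        string
	ContainerName string
}

// RemoveStaleState drops assignments for containers that are no longer
// active, mirroring the "RemoveStaleState: removing container" /
// "Deleted CPUSet assignment" pairs in the log. active reports whether a
// (podUID, containerName) still belongs to a live pod.
func RemoveStaleState(state map[key]string, active func(key) bool) {
	for k := range state {
		if !active(k) {
			delete(state, k) // deleting during range is safe in Go
			// the kubelet logs one line per removed entry here
		}
	}
}
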
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.288457 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.304627 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hvgvk"]
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.305911 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.407653 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-dns-svc\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.407738 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.407824 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qljkl\" (UniqueName: \"kubernetes.io/projected/dead90ba-0731-47f7-9252-45d8ddf2dd5a-kube-api-access-qljkl\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.407900 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.407935 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-config\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.509705 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.509784 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-config\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.509820 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-dns-svc\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.509867 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.509884 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qljkl\" (UniqueName: \"kubernetes.io/projected/dead90ba-0731-47f7-9252-45d8ddf2dd5a-kube-api-access-qljkl\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.510658 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.510655 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-config\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.510863 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-dns-svc\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.511300 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.536288 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qljkl\" (UniqueName: \"kubernetes.io/projected/dead90ba-0731-47f7-9252-45d8ddf2dd5a-kube-api-access-qljkl\") pod \"dnsmasq-dns-698758b865-hvgvk\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.606287 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hvgvk"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.716981 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.800901 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.858951 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg"
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.915091 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-nb\") pod \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") "
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.915170 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zttcb\" (UniqueName: \"kubernetes.io/projected/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-kube-api-access-zttcb\") pod \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") "
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.915232 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-config\") pod \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") "
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.915273 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-sb\") pod \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") "
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.915321 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-dns-svc\") pod \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\" (UID: \"6aa9cce2-7b79-4637-b5bf-9e126d7ec603\") "
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.936143 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-kube-api-access-zttcb" (OuterVolumeSpecName: "kube-api-access-zttcb") pod "6aa9cce2-7b79-4637-b5bf-9e126d7ec603" (UID: "6aa9cce2-7b79-4637-b5bf-9e126d7ec603"). InnerVolumeSpecName "kube-api-access-zttcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.960013 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-config" (OuterVolumeSpecName: "config") pod "6aa9cce2-7b79-4637-b5bf-9e126d7ec603" (UID: "6aa9cce2-7b79-4637-b5bf-9e126d7ec603"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.969620 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6aa9cce2-7b79-4637-b5bf-9e126d7ec603" (UID: "6aa9cce2-7b79-4637-b5bf-9e126d7ec603"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.979877 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6aa9cce2-7b79-4637-b5bf-9e126d7ec603" (UID: "6aa9cce2-7b79-4637-b5bf-9e126d7ec603"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:05 crc kubenswrapper[4728]: I0204 11:45:05.995468 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6aa9cce2-7b79-4637-b5bf-9e126d7ec603" (UID: "6aa9cce2-7b79-4637-b5bf-9e126d7ec603"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.017092 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.017128 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zttcb\" (UniqueName: \"kubernetes.io/projected/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-kube-api-access-zttcb\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.017144 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-config\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.017155 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.017166 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa9cce2-7b79-4637-b5bf-9e126d7ec603-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.096963 4728 generic.go:334] "Generic (PLEG): container finished" podID="6aa9cce2-7b79-4637-b5bf-9e126d7ec603" containerID="0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a" exitCode=0
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.097374 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.097743 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" event={"ID":"6aa9cce2-7b79-4637-b5bf-9e126d7ec603","Type":"ContainerDied","Data":"0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a"}
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.097819 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-g64bg" event={"ID":"6aa9cce2-7b79-4637-b5bf-9e126d7ec603","Type":"ContainerDied","Data":"216db3d323f4926c5d06bdaeca842254658e7aae30b1b7cb2dbc30dbdee5ad40"}
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.097840 4728 scope.go:117] "RemoveContainer" containerID="0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.097862 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hvgvk"]
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.099829 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7e2fbe12-58b0-4438-a274-68040a4ec197","Type":"ContainerStarted","Data":"43e1d66401a72ac6aaa11b6995c1649019b89887b6f90d18e8ddacf7e029d563"}
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.130769 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g64bg"]
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.139479 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g64bg"]
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.300645 4728 scope.go:117] "RemoveContainer" containerID="81bc037ba8ca343055616b8539055270d205d0942ddaf154671b915ba560f7b6"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.339236 4728 scope.go:117] "RemoveContainer" containerID="0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a"
Feb 04 11:45:06 crc kubenswrapper[4728]: E0204 11:45:06.339618 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a\": container with ID starting with 0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a not found: ID does not exist" containerID="0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.339656 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a"} err="failed to get container status \"0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a\": rpc error: code = NotFound desc = could not find container \"0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a\": container with ID starting with 0cc196c9c68a414590cb293b3f0d1e8af95eb8426300513a765db766758cd18a not found: ID does not exist"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.339682 4728 scope.go:117] "RemoveContainer" containerID="81bc037ba8ca343055616b8539055270d205d0942ddaf154671b915ba560f7b6"
Feb 04 11:45:06 crc kubenswrapper[4728]: E0204 11:45:06.340074 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81bc037ba8ca343055616b8539055270d205d0942ddaf154671b915ba560f7b6\": container with ID starting with 81bc037ba8ca343055616b8539055270d205d0942ddaf154671b915ba560f7b6 not found: ID does not exist" containerID="81bc037ba8ca343055616b8539055270d205d0942ddaf154671b915ba560f7b6"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.340114 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81bc037ba8ca343055616b8539055270d205d0942ddaf154671b915ba560f7b6"} err="failed to get container status \"81bc037ba8ca343055616b8539055270d205d0942ddaf154671b915ba560f7b6\": rpc error: code = NotFound desc = could not find container \"81bc037ba8ca343055616b8539055270d205d0942ddaf154671b915ba560f7b6\": container with ID starting with 81bc037ba8ca343055616b8539055270d205d0942ddaf154671b915ba560f7b6 not found: ID does not exist"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.465597 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 04 11:45:06 crc kubenswrapper[4728]: E0204 11:45:06.467897 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa9cce2-7b79-4637-b5bf-9e126d7ec603" containerName="init"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.467936 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa9cce2-7b79-4637-b5bf-9e126d7ec603" containerName="init"
Feb 04 11:45:06 crc kubenswrapper[4728]: E0204 11:45:06.468142 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa9cce2-7b79-4637-b5bf-9e126d7ec603" containerName="dnsmasq-dns"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.468164 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa9cce2-7b79-4637-b5bf-9e126d7ec603" containerName="dnsmasq-dns"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.468848 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa9cce2-7b79-4637-b5bf-9e126d7ec603" containerName="dnsmasq-dns"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.479523 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.483546 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.483591 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.483783 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.484388 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-swqrx"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.491625 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.530349 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea4f2286-1f91-46b5-98af-0ca776207d16-lock\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.530414 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgkf\" (UniqueName: \"kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-kube-api-access-dzgkf\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.530452 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea4f2286-1f91-46b5-98af-0ca776207d16-cache\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.530502 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.530558 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4f2286-1f91-46b5-98af-0ca776207d16-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.530627 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.631585 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea4f2286-1f91-46b5-98af-0ca776207d16-lock\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.631917 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgkf\" (UniqueName: \"kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-kube-api-access-dzgkf\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.631943 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea4f2286-1f91-46b5-98af-0ca776207d16-cache\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.631972 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.632005 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4f2286-1f91-46b5-98af-0ca776207d16-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.632050 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.632136 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea4f2286-1f91-46b5-98af-0ca776207d16-lock\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:06 crc kubenswrapper[4728]: E0204 11:45:06.632210 4728 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 04 11:45:06 crc kubenswrapper[4728]: E0204 11:45:06.632225 4728 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 04 11:45:06 crc kubenswrapper[4728]: E0204 11:45:06.632273 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift podName:ea4f2286-1f91-46b5-98af-0ca776207d16 nodeName:}" failed. No retries permitted until 2026-02-04 11:45:07.132258312 +0000 UTC m=+1056.274962697 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift") pod "swift-storage-0" (UID: "ea4f2286-1f91-46b5-98af-0ca776207d16") : configmap "swift-ring-files" not found Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.632435 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.635849 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4f2286-1f91-46b5-98af-0ca776207d16-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0" Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.637035 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea4f2286-1f91-46b5-98af-0ca776207d16-cache\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0" Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.651718 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzgkf\" (UniqueName: \"kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-kube-api-access-dzgkf\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0" Feb 04 11:45:06 crc kubenswrapper[4728]: I0204 11:45:06.656108 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.049144 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-kjwnv"] Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.052110 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.062826 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.063165 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.063657 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.079110 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kjwnv"] Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.113010 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7e2fbe12-58b0-4438-a274-68040a4ec197","Type":"ContainerStarted","Data":"a5dfa52794004a4c0ca942bf8693a286c9a2daabcc0b58548a0b8d414b11f6ff"} Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.113092 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7e2fbe12-58b0-4438-a274-68040a4ec197","Type":"ContainerStarted","Data":"4f52aa2dab6668b2cfbfbeefdbff1f5b388335fc3fcbc6a251d0e7cfb5fa7385"} Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.113426 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.116550 4728 generic.go:334] "Generic (PLEG): container finished" podID="dead90ba-0731-47f7-9252-45d8ddf2dd5a" containerID="033f13c151d674a6dc88862bc77c1f88ce3e55701e2c2eafadf46fae428bf480" exitCode=0 Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.116581 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hvgvk" event={"ID":"dead90ba-0731-47f7-9252-45d8ddf2dd5a","Type":"ContainerDied","Data":"033f13c151d674a6dc88862bc77c1f88ce3e55701e2c2eafadf46fae428bf480"} Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.116600 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hvgvk" event={"ID":"dead90ba-0731-47f7-9252-45d8ddf2dd5a","Type":"ContainerStarted","Data":"5c2b1eff102ad7c7bd8dd03d1871118fed3ebf5fe38844fc6f3f54239f63bc85"} Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.139829 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0" Feb 04 11:45:07 crc kubenswrapper[4728]: E0204 11:45:07.140025 4728 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 04 11:45:07 crc kubenswrapper[4728]: E0204 11:45:07.140053 4728 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 04 11:45:07 crc kubenswrapper[4728]: E0204 11:45:07.140114 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift podName:ea4f2286-1f91-46b5-98af-0ca776207d16 nodeName:}" failed. No retries permitted until 2026-02-04 11:45:08.140094323 +0000 UTC m=+1057.282798728 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift") pod "swift-storage-0" (UID: "ea4f2286-1f91-46b5-98af-0ca776207d16") : configmap "swift-ring-files" not found Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.167122 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.009040007 podStartE2EDuration="3.167103206s" podCreationTimestamp="2026-02-04 11:45:04 +0000 UTC" firstStartedPulling="2026-02-04 11:45:05.182070988 +0000 UTC m=+1054.324775383" lastFinishedPulling="2026-02-04 11:45:06.340134197 +0000 UTC m=+1055.482838582" observedRunningTime="2026-02-04 11:45:07.13739789 +0000 UTC m=+1056.280102325" watchObservedRunningTime="2026-02-04 11:45:07.167103206 +0000 UTC m=+1056.309807591" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.243135 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7bc\" (UniqueName: \"kubernetes.io/projected/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-kube-api-access-vl7bc\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.243254 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-etc-swift\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.243276 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-ring-data-devices\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.243351 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-scripts\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.243372 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-combined-ca-bundle\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.243457 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-swiftconf\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.243977 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-dispersionconf\") pod 
\"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.345910 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl7bc\" (UniqueName: \"kubernetes.io/projected/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-kube-api-access-vl7bc\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.346260 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-etc-swift\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.346285 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-ring-data-devices\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.346334 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-scripts\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.346354 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-combined-ca-bundle\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.346380 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-swiftconf\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.346430 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-dispersionconf\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.347123 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-scripts\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.347179 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-etc-swift\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc 
kubenswrapper[4728]: I0204 11:45:07.347356 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-ring-data-devices\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.351024 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-dispersionconf\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.351246 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-combined-ca-bundle\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.351740 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-swiftconf\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.367791 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl7bc\" (UniqueName: \"kubernetes.io/projected/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-kube-api-access-vl7bc\") pod \"swift-ring-rebalance-kjwnv\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") " pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.379351 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kjwnv" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.592343 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa9cce2-7b79-4637-b5bf-9e126d7ec603" path="/var/lib/kubelet/pods/6aa9cce2-7b79-4637-b5bf-9e126d7ec603/volumes" Feb 04 11:45:07 crc kubenswrapper[4728]: I0204 11:45:07.880535 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kjwnv"] Feb 04 11:45:07 crc kubenswrapper[4728]: W0204 11:45:07.881951 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc0cb1f_508e_4ac0_b653_aeb03317bdd7.slice/crio-28b8ce6181236959a792fe60d4233d0ad7dd254fea51590b338912e10d10bae9 WatchSource:0}: Error finding container 28b8ce6181236959a792fe60d4233d0ad7dd254fea51590b338912e10d10bae9: Status 404 returned error can't find the container with id 28b8ce6181236959a792fe60d4233d0ad7dd254fea51590b338912e10d10bae9 Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.125676 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hvgvk" event={"ID":"dead90ba-0731-47f7-9252-45d8ddf2dd5a","Type":"ContainerStarted","Data":"ae1849fd6482bb0d8fb0c3ff1aaeba1d059d16bcb3b6febe3aa178beb91f0bc7"} Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.126059 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-hvgvk" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.128412 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kjwnv" event={"ID":"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7","Type":"ContainerStarted","Data":"28b8ce6181236959a792fe60d4233d0ad7dd254fea51590b338912e10d10bae9"} Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.146075 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-hvgvk" podStartSLOduration=3.146060917 podStartE2EDuration="3.146060917s" podCreationTimestamp="2026-02-04 11:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:45:08.144624813 +0000 UTC m=+1057.287329218" watchObservedRunningTime="2026-02-04 11:45:08.146060917 +0000 UTC m=+1057.288765302" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.159267 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0" Feb 04 11:45:08 crc kubenswrapper[4728]: E0204 11:45:08.159657 4728 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 04 11:45:08 crc kubenswrapper[4728]: E0204 11:45:08.159684 4728 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 04 11:45:08 crc kubenswrapper[4728]: E0204 11:45:08.159801 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift podName:ea4f2286-1f91-46b5-98af-0ca776207d16 nodeName:}" failed. No retries permitted until 2026-02-04 11:45:10.159773233 +0000 UTC m=+1059.302477688 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift") pod "swift-storage-0" (UID: "ea4f2286-1f91-46b5-98af-0ca776207d16") : configmap "swift-ring-files" not found Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.598009 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cn98m"] Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.599510 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cn98m" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.610357 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cn98m"] Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.649416 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0548-account-create-update-66scr"] Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.650427 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0548-account-create-update-66scr" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.654603 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.668964 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9v25\" (UniqueName: \"kubernetes.io/projected/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-kube-api-access-h9v25\") pod \"glance-0548-account-create-update-66scr\" (UID: \"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3\") " pod="openstack/glance-0548-account-create-update-66scr" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.669118 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0548-account-create-update-66scr"] Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.669137 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a684128-2a85-49af-857f-3d37de311853-operator-scripts\") pod \"glance-db-create-cn98m\" (UID: \"5a684128-2a85-49af-857f-3d37de311853\") " pod="openstack/glance-db-create-cn98m" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.669168 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-operator-scripts\") pod \"glance-0548-account-create-update-66scr\" (UID: \"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3\") " pod="openstack/glance-0548-account-create-update-66scr" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.669308 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp64m\" (UniqueName: \"kubernetes.io/projected/5a684128-2a85-49af-857f-3d37de311853-kube-api-access-qp64m\") pod \"glance-db-create-cn98m\" (UID: \"5a684128-2a85-49af-857f-3d37de311853\") " pod="openstack/glance-db-create-cn98m" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.770904 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a684128-2a85-49af-857f-3d37de311853-operator-scripts\") pod \"glance-db-create-cn98m\" (UID: \"5a684128-2a85-49af-857f-3d37de311853\") " pod="openstack/glance-db-create-cn98m" Feb 04 11:45:08 crc kubenswrapper[4728]: 
I0204 11:45:08.770971 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-operator-scripts\") pod \"glance-0548-account-create-update-66scr\" (UID: \"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3\") " pod="openstack/glance-0548-account-create-update-66scr" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.771055 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp64m\" (UniqueName: \"kubernetes.io/projected/5a684128-2a85-49af-857f-3d37de311853-kube-api-access-qp64m\") pod \"glance-db-create-cn98m\" (UID: \"5a684128-2a85-49af-857f-3d37de311853\") " pod="openstack/glance-db-create-cn98m" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.771119 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9v25\" (UniqueName: \"kubernetes.io/projected/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-kube-api-access-h9v25\") pod \"glance-0548-account-create-update-66scr\" (UID: \"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3\") " pod="openstack/glance-0548-account-create-update-66scr" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.771978 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-operator-scripts\") pod \"glance-0548-account-create-update-66scr\" (UID: \"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3\") " pod="openstack/glance-0548-account-create-update-66scr" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.772182 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a684128-2a85-49af-857f-3d37de311853-operator-scripts\") pod \"glance-db-create-cn98m\" (UID: \"5a684128-2a85-49af-857f-3d37de311853\") " pod="openstack/glance-db-create-cn98m" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.791419 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9v25\" (UniqueName: \"kubernetes.io/projected/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-kube-api-access-h9v25\") pod \"glance-0548-account-create-update-66scr\" (UID: \"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3\") " pod="openstack/glance-0548-account-create-update-66scr" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.792559 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp64m\" (UniqueName: \"kubernetes.io/projected/5a684128-2a85-49af-857f-3d37de311853-kube-api-access-qp64m\") pod \"glance-db-create-cn98m\" (UID: \"5a684128-2a85-49af-857f-3d37de311853\") " pod="openstack/glance-db-create-cn98m" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.923249 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cn98m" Feb 04 11:45:08 crc kubenswrapper[4728]: I0204 11:45:08.974977 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0548-account-create-update-66scr" Feb 04 11:45:09 crc kubenswrapper[4728]: I0204 11:45:09.412072 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cn98m"] Feb 04 11:45:09 crc kubenswrapper[4728]: W0204 11:45:09.417425 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a684128_2a85_49af_857f_3d37de311853.slice/crio-35236a74258ac59328e93385ce57a41c7f30191e754802e70cde5d4471e9de30 WatchSource:0}: Error finding container 35236a74258ac59328e93385ce57a41c7f30191e754802e70cde5d4471e9de30: Status 404 returned error can't find the container with id 35236a74258ac59328e93385ce57a41c7f30191e754802e70cde5d4471e9de30 Feb 04 11:45:09 crc kubenswrapper[4728]: I0204 11:45:09.529863 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0548-account-create-update-66scr"] Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.144838 4728 generic.go:334] "Generic (PLEG): container finished" podID="5a684128-2a85-49af-857f-3d37de311853" containerID="d2a70547838fe1f358d66306c3fb124fbb8dab681755c1c77f57f252559ab0c8" exitCode=0 Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.144899 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cn98m" event={"ID":"5a684128-2a85-49af-857f-3d37de311853","Type":"ContainerDied","Data":"d2a70547838fe1f358d66306c3fb124fbb8dab681755c1c77f57f252559ab0c8"} Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.144952 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cn98m" event={"ID":"5a684128-2a85-49af-857f-3d37de311853","Type":"ContainerStarted","Data":"35236a74258ac59328e93385ce57a41c7f30191e754802e70cde5d4471e9de30"} Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.193397 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0" Feb 04 11:45:10 crc kubenswrapper[4728]: E0204 11:45:10.193687 4728 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 04 11:45:10 crc kubenswrapper[4728]: E0204 11:45:10.193705 4728 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 04 11:45:10 crc kubenswrapper[4728]: E0204 11:45:10.193786 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift podName:ea4f2286-1f91-46b5-98af-0ca776207d16 nodeName:}" failed. No retries permitted until 2026-02-04 11:45:14.193766744 +0000 UTC m=+1063.336471149 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift") pod "swift-storage-0" (UID: "ea4f2286-1f91-46b5-98af-0ca776207d16") : configmap "swift-ring-files" not found Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.221070 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-74nts"] Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.222039 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-74nts" Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.223906 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.239890 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-74nts"] Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.396705 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-operator-scripts\") pod \"root-account-create-update-74nts\" (UID: \"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3\") " pod="openstack/root-account-create-update-74nts" Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.396871 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkpwm\" (UniqueName: \"kubernetes.io/projected/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-kube-api-access-bkpwm\") pod \"root-account-create-update-74nts\" (UID: \"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3\") " pod="openstack/root-account-create-update-74nts" Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.498866 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkpwm\" (UniqueName: \"kubernetes.io/projected/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-kube-api-access-bkpwm\") pod \"root-account-create-update-74nts\" (UID: \"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3\") " pod="openstack/root-account-create-update-74nts" Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.498960 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-operator-scripts\") pod \"root-account-create-update-74nts\" (UID: \"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3\") " pod="openstack/root-account-create-update-74nts" Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.499993 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-operator-scripts\") pod \"root-account-create-update-74nts\" (UID: \"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3\") " pod="openstack/root-account-create-update-74nts" Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.538065 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkpwm\" (UniqueName: \"kubernetes.io/projected/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-kube-api-access-bkpwm\") pod \"root-account-create-update-74nts\" (UID: \"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3\") " pod="openstack/root-account-create-update-74nts" Feb 04 11:45:10 crc kubenswrapper[4728]: I0204 11:45:10.541636 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-74nts" Feb 04 11:45:11 crc kubenswrapper[4728]: W0204 11:45:11.963012 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod956f1cf3_2478_4ec6_9b1c_04e15b5f3ee3.slice/crio-ffcbe7a922044ca84737d8ae4c3403d2012ccd3cac843be662207f3828e9b399 WatchSource:0}: Error finding container ffcbe7a922044ca84737d8ae4c3403d2012ccd3cac843be662207f3828e9b399: Status 404 returned error can't find the container with id ffcbe7a922044ca84737d8ae4c3403d2012ccd3cac843be662207f3828e9b399 Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.129774 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cn98m" Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.165130 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0548-account-create-update-66scr" event={"ID":"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3","Type":"ContainerStarted","Data":"ffcbe7a922044ca84737d8ae4c3403d2012ccd3cac843be662207f3828e9b399"} Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.168072 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cn98m" event={"ID":"5a684128-2a85-49af-857f-3d37de311853","Type":"ContainerDied","Data":"35236a74258ac59328e93385ce57a41c7f30191e754802e70cde5d4471e9de30"} Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.168103 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35236a74258ac59328e93385ce57a41c7f30191e754802e70cde5d4471e9de30" Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.168140 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cn98m" Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.230065 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a684128-2a85-49af-857f-3d37de311853-operator-scripts\") pod \"5a684128-2a85-49af-857f-3d37de311853\" (UID: \"5a684128-2a85-49af-857f-3d37de311853\") " Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.230204 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp64m\" (UniqueName: \"kubernetes.io/projected/5a684128-2a85-49af-857f-3d37de311853-kube-api-access-qp64m\") pod \"5a684128-2a85-49af-857f-3d37de311853\" (UID: \"5a684128-2a85-49af-857f-3d37de311853\") " Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.230785 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a684128-2a85-49af-857f-3d37de311853-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a684128-2a85-49af-857f-3d37de311853" (UID: "5a684128-2a85-49af-857f-3d37de311853"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.247877 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a684128-2a85-49af-857f-3d37de311853-kube-api-access-qp64m" (OuterVolumeSpecName: "kube-api-access-qp64m") pod "5a684128-2a85-49af-857f-3d37de311853" (UID: "5a684128-2a85-49af-857f-3d37de311853"). InnerVolumeSpecName "kube-api-access-qp64m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.331916 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a684128-2a85-49af-857f-3d37de311853-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.331946 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp64m\" (UniqueName: \"kubernetes.io/projected/5a684128-2a85-49af-857f-3d37de311853-kube-api-access-qp64m\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.367565 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-74nts"] Feb 04 11:45:12 crc kubenswrapper[4728]: W0204 11:45:12.373541 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeb11fca_b60e_4b6d_91e2_dc1f4ca1ccc3.slice/crio-d896491310d5125a4770c6f5f32594d7b8bd7b244cdf352a77058d991681eff2 WatchSource:0}: Error finding container d896491310d5125a4770c6f5f32594d7b8bd7b244cdf352a77058d991681eff2: Status 404 returned error can't find the container with id d896491310d5125a4770c6f5f32594d7b8bd7b244cdf352a77058d991681eff2 Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.920747 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-2gkww"] Feb 04 11:45:12 crc kubenswrapper[4728]: E0204 11:45:12.921277 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a684128-2a85-49af-857f-3d37de311853" containerName="mariadb-database-create" Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.921293 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a684128-2a85-49af-857f-3d37de311853" containerName="mariadb-database-create" Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.921452 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a684128-2a85-49af-857f-3d37de311853" containerName="mariadb-database-create" Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.921940 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2gkww" Feb 04 11:45:12 crc kubenswrapper[4728]: I0204 11:45:12.928851 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2gkww"] Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.033609 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ec4e-account-create-update-l68lt"] Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.034557 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ec4e-account-create-update-l68lt" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.038396 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.044361 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ec4e-account-create-update-l68lt"] Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.045818 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f5q2\" (UniqueName: \"kubernetes.io/projected/beac9f4f-a615-4244-9ba8-ded8ce531f3b-kube-api-access-9f5q2\") pod \"keystone-db-create-2gkww\" (UID: \"beac9f4f-a615-4244-9ba8-ded8ce531f3b\") " pod="openstack/keystone-db-create-2gkww" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.045855 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/beac9f4f-a615-4244-9ba8-ded8ce531f3b-operator-scripts\") pod \"keystone-db-create-2gkww\" (UID: \"beac9f4f-a615-4244-9ba8-ded8ce531f3b\") " pod="openstack/keystone-db-create-2gkww" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.146911 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e18986c-826b-4478-a01a-29fcce1f946f-operator-scripts\") pod \"keystone-ec4e-account-create-update-l68lt\" (UID: \"6e18986c-826b-4478-a01a-29fcce1f946f\") " pod="openstack/keystone-ec4e-account-create-update-l68lt" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.147012 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tls8f\" (UniqueName: \"kubernetes.io/projected/6e18986c-826b-4478-a01a-29fcce1f946f-kube-api-access-tls8f\") pod \"keystone-ec4e-account-create-update-l68lt\" (UID: \"6e18986c-826b-4478-a01a-29fcce1f946f\") " pod="openstack/keystone-ec4e-account-create-update-l68lt" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.147086 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f5q2\" (UniqueName: \"kubernetes.io/projected/beac9f4f-a615-4244-9ba8-ded8ce531f3b-kube-api-access-9f5q2\") pod \"keystone-db-create-2gkww\" (UID: \"beac9f4f-a615-4244-9ba8-ded8ce531f3b\") " pod="openstack/keystone-db-create-2gkww" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.147109 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/beac9f4f-a615-4244-9ba8-ded8ce531f3b-operator-scripts\") pod \"keystone-db-create-2gkww\" (UID: \"beac9f4f-a615-4244-9ba8-ded8ce531f3b\") " pod="openstack/keystone-db-create-2gkww" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.147924 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/beac9f4f-a615-4244-9ba8-ded8ce531f3b-operator-scripts\") pod \"keystone-db-create-2gkww\" (UID: \"beac9f4f-a615-4244-9ba8-ded8ce531f3b\") " pod="openstack/keystone-db-create-2gkww" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.171992 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f5q2\" (UniqueName: \"kubernetes.io/projected/beac9f4f-a615-4244-9ba8-ded8ce531f3b-kube-api-access-9f5q2\") pod 
\"keystone-db-create-2gkww\" (UID: \"beac9f4f-a615-4244-9ba8-ded8ce531f3b\") " pod="openstack/keystone-db-create-2gkww" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.176152 4728 generic.go:334] "Generic (PLEG): container finished" podID="956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3" containerID="36ddc99f8878083387d1e77b4af7e9c4113d1170b93f42016aea83bfb25d53ba" exitCode=0 Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.176220 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0548-account-create-update-66scr" event={"ID":"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3","Type":"ContainerDied","Data":"36ddc99f8878083387d1e77b4af7e9c4113d1170b93f42016aea83bfb25d53ba"} Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.177895 4728 generic.go:334] "Generic (PLEG): container finished" podID="feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3" containerID="19f8fb769c037cda88b380f934d9f29ab566e04fcb53660976556b6985c58a0f" exitCode=0 Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.177949 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-74nts" event={"ID":"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3","Type":"ContainerDied","Data":"19f8fb769c037cda88b380f934d9f29ab566e04fcb53660976556b6985c58a0f"} Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.177971 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-74nts" event={"ID":"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3","Type":"ContainerStarted","Data":"d896491310d5125a4770c6f5f32594d7b8bd7b244cdf352a77058d991681eff2"} Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.179834 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kjwnv" event={"ID":"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7","Type":"ContainerStarted","Data":"413afee1831e7ec1f0e2f6cf567ab79fb4a6213d3021e577182e64366d68a8b7"} Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.241368 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-kjwnv" podStartSLOduration=1.835047174 podStartE2EDuration="6.241350069s" podCreationTimestamp="2026-02-04 11:45:07 +0000 UTC" firstStartedPulling="2026-02-04 11:45:07.884508489 +0000 UTC m=+1057.027212874" lastFinishedPulling="2026-02-04 11:45:12.290811384 +0000 UTC m=+1061.433515769" observedRunningTime="2026-02-04 11:45:13.230231765 +0000 UTC m=+1062.372936160" watchObservedRunningTime="2026-02-04 11:45:13.241350069 +0000 UTC m=+1062.384054454" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.243282 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-ktzhs"] Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.244472 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-ktzhs" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.248476 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e18986c-826b-4478-a01a-29fcce1f946f-operator-scripts\") pod \"keystone-ec4e-account-create-update-l68lt\" (UID: \"6e18986c-826b-4478-a01a-29fcce1f946f\") " pod="openstack/keystone-ec4e-account-create-update-l68lt" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.248601 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tls8f\" (UniqueName: \"kubernetes.io/projected/6e18986c-826b-4478-a01a-29fcce1f946f-kube-api-access-tls8f\") pod \"keystone-ec4e-account-create-update-l68lt\" (UID: \"6e18986c-826b-4478-a01a-29fcce1f946f\") " pod="openstack/keystone-ec4e-account-create-update-l68lt" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.249340 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e18986c-826b-4478-a01a-29fcce1f946f-operator-scripts\") pod \"keystone-ec4e-account-create-update-l68lt\" (UID: \"6e18986c-826b-4478-a01a-29fcce1f946f\") " pod="openstack/keystone-ec4e-account-create-update-l68lt" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.254657 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ktzhs"] Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.269938 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tls8f\" (UniqueName: \"kubernetes.io/projected/6e18986c-826b-4478-a01a-29fcce1f946f-kube-api-access-tls8f\") pod \"keystone-ec4e-account-create-update-l68lt\" (UID: \"6e18986c-826b-4478-a01a-29fcce1f946f\") " pod="openstack/keystone-ec4e-account-create-update-l68lt" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.303309 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2gkww" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.344466 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9211-account-create-update-bgnxc"] Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.347034 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9211-account-create-update-bgnxc" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.350608 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc936e27-7f59-4b4f-af88-3489bac544c0-operator-scripts\") pod \"placement-db-create-ktzhs\" (UID: \"bc936e27-7f59-4b4f-af88-3489bac544c0\") " pod="openstack/placement-db-create-ktzhs" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.350727 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw2b6\" (UniqueName: \"kubernetes.io/projected/bc936e27-7f59-4b4f-af88-3489bac544c0-kube-api-access-vw2b6\") pod \"placement-db-create-ktzhs\" (UID: \"bc936e27-7f59-4b4f-af88-3489bac544c0\") " pod="openstack/placement-db-create-ktzhs" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.355530 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.359495 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ec4e-account-create-update-l68lt" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.365734 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9211-account-create-update-bgnxc"] Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.459429 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8222\" (UniqueName: \"kubernetes.io/projected/83e29c6f-daef-4720-805c-a5889be741e0-kube-api-access-s8222\") pod \"placement-9211-account-create-update-bgnxc\" (UID: \"83e29c6f-daef-4720-805c-a5889be741e0\") " pod="openstack/placement-9211-account-create-update-bgnxc" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.459785 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc936e27-7f59-4b4f-af88-3489bac544c0-operator-scripts\") pod \"placement-db-create-ktzhs\" (UID: \"bc936e27-7f59-4b4f-af88-3489bac544c0\") " pod="openstack/placement-db-create-ktzhs" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.459855 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e29c6f-daef-4720-805c-a5889be741e0-operator-scripts\") pod \"placement-9211-account-create-update-bgnxc\" (UID: \"83e29c6f-daef-4720-805c-a5889be741e0\") " pod="openstack/placement-9211-account-create-update-bgnxc" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.459920 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw2b6\" (UniqueName: \"kubernetes.io/projected/bc936e27-7f59-4b4f-af88-3489bac544c0-kube-api-access-vw2b6\") pod \"placement-db-create-ktzhs\" (UID: \"bc936e27-7f59-4b4f-af88-3489bac544c0\") " pod="openstack/placement-db-create-ktzhs" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.462912 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc936e27-7f59-4b4f-af88-3489bac544c0-operator-scripts\") pod \"placement-db-create-ktzhs\" (UID: \"bc936e27-7f59-4b4f-af88-3489bac544c0\") " pod="openstack/placement-db-create-ktzhs" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 
11:45:13.476455 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw2b6\" (UniqueName: \"kubernetes.io/projected/bc936e27-7f59-4b4f-af88-3489bac544c0-kube-api-access-vw2b6\") pod \"placement-db-create-ktzhs\" (UID: \"bc936e27-7f59-4b4f-af88-3489bac544c0\") " pod="openstack/placement-db-create-ktzhs" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.561612 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8222\" (UniqueName: \"kubernetes.io/projected/83e29c6f-daef-4720-805c-a5889be741e0-kube-api-access-s8222\") pod \"placement-9211-account-create-update-bgnxc\" (UID: \"83e29c6f-daef-4720-805c-a5889be741e0\") " pod="openstack/placement-9211-account-create-update-bgnxc" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.561712 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e29c6f-daef-4720-805c-a5889be741e0-operator-scripts\") pod \"placement-9211-account-create-update-bgnxc\" (UID: \"83e29c6f-daef-4720-805c-a5889be741e0\") " pod="openstack/placement-9211-account-create-update-bgnxc" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.564847 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e29c6f-daef-4720-805c-a5889be741e0-operator-scripts\") pod \"placement-9211-account-create-update-bgnxc\" (UID: \"83e29c6f-daef-4720-805c-a5889be741e0\") " pod="openstack/placement-9211-account-create-update-bgnxc" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.567872 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ktzhs" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.577804 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8222\" (UniqueName: \"kubernetes.io/projected/83e29c6f-daef-4720-805c-a5889be741e0-kube-api-access-s8222\") pod \"placement-9211-account-create-update-bgnxc\" (UID: \"83e29c6f-daef-4720-805c-a5889be741e0\") " pod="openstack/placement-9211-account-create-update-bgnxc" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.707028 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9211-account-create-update-bgnxc" Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.814284 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2gkww"] Feb 04 11:45:13 crc kubenswrapper[4728]: W0204 11:45:13.820358 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeac9f4f_a615_4244_9ba8_ded8ce531f3b.slice/crio-e2e44f668536ef340a745085c514c8d2bb6bd1c3cb37940ca485dfdc9f16d188 WatchSource:0}: Error finding container e2e44f668536ef340a745085c514c8d2bb6bd1c3cb37940ca485dfdc9f16d188: Status 404 returned error can't find the container with id e2e44f668536ef340a745085c514c8d2bb6bd1c3cb37940ca485dfdc9f16d188 Feb 04 11:45:13 crc kubenswrapper[4728]: I0204 11:45:13.907202 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ec4e-account-create-update-l68lt"] Feb 04 11:45:13 crc kubenswrapper[4728]: W0204 11:45:13.910350 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e18986c_826b_4478_a01a_29fcce1f946f.slice/crio-e0486115195724df3ae7e4e9a0139b8ef1ec8233d30dd2d4f1dd4f9de9e24dd2 WatchSource:0}: Error finding container e0486115195724df3ae7e4e9a0139b8ef1ec8233d30dd2d4f1dd4f9de9e24dd2: Status 404 returned error can't find the container with id e0486115195724df3ae7e4e9a0139b8ef1ec8233d30dd2d4f1dd4f9de9e24dd2 Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.017131 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ktzhs"] Feb 04 11:45:14 crc kubenswrapper[4728]: W0204 11:45:14.035059 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc936e27_7f59_4b4f_af88_3489bac544c0.slice/crio-df9bc7b66f076ceb022c453df4aa1ee8284e011323982d7edb063d92d09c5398 WatchSource:0}: Error finding container df9bc7b66f076ceb022c453df4aa1ee8284e011323982d7edb063d92d09c5398: Status 404 returned error can't find the container with id df9bc7b66f076ceb022c453df4aa1ee8284e011323982d7edb063d92d09c5398 Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.164878 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9211-account-create-update-bgnxc"] Feb 04 11:45:14 crc kubenswrapper[4728]: W0204 11:45:14.175904 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83e29c6f_daef_4720_805c_a5889be741e0.slice/crio-6603d1d95cbc9dcd4ad9a11c03c2a3b4865762aba06adb81938de66f1c8e97d5 WatchSource:0}: Error finding container 6603d1d95cbc9dcd4ad9a11c03c2a3b4865762aba06adb81938de66f1c8e97d5: Status 404 returned error can't find the container with id 6603d1d95cbc9dcd4ad9a11c03c2a3b4865762aba06adb81938de66f1c8e97d5 Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.190727 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2gkww" event={"ID":"beac9f4f-a615-4244-9ba8-ded8ce531f3b","Type":"ContainerStarted","Data":"1fb7b2873fa6c0af713444af4006694de4367a209b847fbd2029944d045c62f9"} Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.190792 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2gkww" event={"ID":"beac9f4f-a615-4244-9ba8-ded8ce531f3b","Type":"ContainerStarted","Data":"e2e44f668536ef340a745085c514c8d2bb6bd1c3cb37940ca485dfdc9f16d188"} Feb 04 
11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.193188 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ec4e-account-create-update-l68lt" event={"ID":"6e18986c-826b-4478-a01a-29fcce1f946f","Type":"ContainerStarted","Data":"97f108bc7ed74972f167ede74797502d8f80463477c9de97881830a8243c977e"} Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.193253 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ec4e-account-create-update-l68lt" event={"ID":"6e18986c-826b-4478-a01a-29fcce1f946f","Type":"ContainerStarted","Data":"e0486115195724df3ae7e4e9a0139b8ef1ec8233d30dd2d4f1dd4f9de9e24dd2"} Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.200790 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9211-account-create-update-bgnxc" event={"ID":"83e29c6f-daef-4720-805c-a5889be741e0","Type":"ContainerStarted","Data":"6603d1d95cbc9dcd4ad9a11c03c2a3b4865762aba06adb81938de66f1c8e97d5"} Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.215727 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ktzhs" event={"ID":"bc936e27-7f59-4b4f-af88-3489bac544c0","Type":"ContainerStarted","Data":"df9bc7b66f076ceb022c453df4aa1ee8284e011323982d7edb063d92d09c5398"} Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.227013 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-ec4e-account-create-update-l68lt" podStartSLOduration=1.22699 podStartE2EDuration="1.22699s" podCreationTimestamp="2026-02-04 11:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:45:14.225117435 +0000 UTC m=+1063.367821820" watchObservedRunningTime="2026-02-04 11:45:14.22699 +0000 UTC m=+1063.369694375" Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.272041 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0" Feb 04 11:45:14 crc kubenswrapper[4728]: E0204 11:45:14.272278 4728 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 04 11:45:14 crc kubenswrapper[4728]: E0204 11:45:14.272774 4728 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 04 11:45:14 crc kubenswrapper[4728]: E0204 11:45:14.272853 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift podName:ea4f2286-1f91-46b5-98af-0ca776207d16 nodeName:}" failed. No retries permitted until 2026-02-04 11:45:22.272827739 +0000 UTC m=+1071.415532124 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift") pod "swift-storage-0" (UID: "ea4f2286-1f91-46b5-98af-0ca776207d16") : configmap "swift-ring-files" not found Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.561369 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0548-account-create-update-66scr" Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.685280 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-operator-scripts\") pod \"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3\" (UID: \"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3\") " Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.685351 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9v25\" (UniqueName: \"kubernetes.io/projected/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-kube-api-access-h9v25\") pod \"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3\" (UID: \"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3\") " Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.686462 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3" (UID: "956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.690461 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-kube-api-access-h9v25" (OuterVolumeSpecName: "kube-api-access-h9v25") pod "956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3" (UID: "956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3"). InnerVolumeSpecName "kube-api-access-h9v25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.713610 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-74nts" Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.787126 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.787191 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9v25\" (UniqueName: \"kubernetes.io/projected/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3-kube-api-access-h9v25\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.888373 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-operator-scripts\") pod \"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3\" (UID: \"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3\") " Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.888770 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkpwm\" (UniqueName: \"kubernetes.io/projected/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-kube-api-access-bkpwm\") pod \"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3\" (UID: \"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3\") " Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.889655 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3" (UID: "feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.894229 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-kube-api-access-bkpwm" (OuterVolumeSpecName: "kube-api-access-bkpwm") pod "feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3" (UID: "feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3"). InnerVolumeSpecName "kube-api-access-bkpwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.991164 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkpwm\" (UniqueName: \"kubernetes.io/projected/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-kube-api-access-bkpwm\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:14 crc kubenswrapper[4728]: I0204 11:45:14.991439 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.223277 4728 generic.go:334] "Generic (PLEG): container finished" podID="6e18986c-826b-4478-a01a-29fcce1f946f" containerID="97f108bc7ed74972f167ede74797502d8f80463477c9de97881830a8243c977e" exitCode=0 Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.223346 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ec4e-account-create-update-l68lt" event={"ID":"6e18986c-826b-4478-a01a-29fcce1f946f","Type":"ContainerDied","Data":"97f108bc7ed74972f167ede74797502d8f80463477c9de97881830a8243c977e"} Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.224626 4728 generic.go:334] "Generic (PLEG): container finished" podID="83e29c6f-daef-4720-805c-a5889be741e0" containerID="73ac02af509917ba82c429e8458ce626e04cd6d3c71b3655f16d21bca0c650be" exitCode=0 Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.224679 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9211-account-create-update-bgnxc" event={"ID":"83e29c6f-daef-4720-805c-a5889be741e0","Type":"ContainerDied","Data":"73ac02af509917ba82c429e8458ce626e04cd6d3c71b3655f16d21bca0c650be"} Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.226229 4728 generic.go:334] "Generic (PLEG): container finished" podID="bc936e27-7f59-4b4f-af88-3489bac544c0" containerID="b5a0fc5178591163f1573028d90f5ea14c105fec0915974b0ad82d41f0f1c341" exitCode=0 Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.226284 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ktzhs" event={"ID":"bc936e27-7f59-4b4f-af88-3489bac544c0","Type":"ContainerDied","Data":"b5a0fc5178591163f1573028d90f5ea14c105fec0915974b0ad82d41f0f1c341"} Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.230981 4728 generic.go:334] "Generic (PLEG): container finished" podID="beac9f4f-a615-4244-9ba8-ded8ce531f3b" containerID="1fb7b2873fa6c0af713444af4006694de4367a209b847fbd2029944d045c62f9" exitCode=0 Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.231059 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2gkww" event={"ID":"beac9f4f-a615-4244-9ba8-ded8ce531f3b","Type":"ContainerDied","Data":"1fb7b2873fa6c0af713444af4006694de4367a209b847fbd2029944d045c62f9"} Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.235436 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-0548-account-create-update-66scr" event={"ID":"956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3","Type":"ContainerDied","Data":"ffcbe7a922044ca84737d8ae4c3403d2012ccd3cac843be662207f3828e9b399"} Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.235480 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffcbe7a922044ca84737d8ae4c3403d2012ccd3cac843be662207f3828e9b399" Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.235585 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0548-account-create-update-66scr" Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.240103 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-74nts" event={"ID":"feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3","Type":"ContainerDied","Data":"d896491310d5125a4770c6f5f32594d7b8bd7b244cdf352a77058d991681eff2"} Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.240300 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d896491310d5125a4770c6f5f32594d7b8bd7b244cdf352a77058d991681eff2" Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.240462 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-74nts" Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.608871 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-hvgvk" Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.618169 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2gkww" Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.677249 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bms9x"] Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.677538 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" podUID="355651da-79e8-4420-addf-f27c8ec3e9e7" containerName="dnsmasq-dns" containerID="cri-o://4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e" gracePeriod=10 Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.701376 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f5q2\" (UniqueName: \"kubernetes.io/projected/beac9f4f-a615-4244-9ba8-ded8ce531f3b-kube-api-access-9f5q2\") pod \"beac9f4f-a615-4244-9ba8-ded8ce531f3b\" (UID: \"beac9f4f-a615-4244-9ba8-ded8ce531f3b\") " Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.701765 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/beac9f4f-a615-4244-9ba8-ded8ce531f3b-operator-scripts\") pod \"beac9f4f-a615-4244-9ba8-ded8ce531f3b\" (UID: \"beac9f4f-a615-4244-9ba8-ded8ce531f3b\") " Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.702415 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beac9f4f-a615-4244-9ba8-ded8ce531f3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "beac9f4f-a615-4244-9ba8-ded8ce531f3b" (UID: "beac9f4f-a615-4244-9ba8-ded8ce531f3b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.705157 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beac9f4f-a615-4244-9ba8-ded8ce531f3b-kube-api-access-9f5q2" (OuterVolumeSpecName: "kube-api-access-9f5q2") pod "beac9f4f-a615-4244-9ba8-ded8ce531f3b" (UID: "beac9f4f-a615-4244-9ba8-ded8ce531f3b"). InnerVolumeSpecName "kube-api-access-9f5q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.803946 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f5q2\" (UniqueName: \"kubernetes.io/projected/beac9f4f-a615-4244-9ba8-ded8ce531f3b-kube-api-access-9f5q2\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:15 crc kubenswrapper[4728]: I0204 11:45:15.804029 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/beac9f4f-a615-4244-9ba8-ded8ce531f3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.145127 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.248167 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2gkww" event={"ID":"beac9f4f-a615-4244-9ba8-ded8ce531f3b","Type":"ContainerDied","Data":"e2e44f668536ef340a745085c514c8d2bb6bd1c3cb37940ca485dfdc9f16d188"} Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.248217 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2e44f668536ef340a745085c514c8d2bb6bd1c3cb37940ca485dfdc9f16d188" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.248184 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2gkww" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.250117 4728 generic.go:334] "Generic (PLEG): container finished" podID="355651da-79e8-4420-addf-f27c8ec3e9e7" containerID="4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e" exitCode=0 Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.250176 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.250222 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" event={"ID":"355651da-79e8-4420-addf-f27c8ec3e9e7","Type":"ContainerDied","Data":"4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e"} Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.250908 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bms9x" event={"ID":"355651da-79e8-4420-addf-f27c8ec3e9e7","Type":"ContainerDied","Data":"a6e9e6a78c9c8f3ad7224d148127aab728a0930bb54bae2dbd7b46ea9c44c356"} Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.250930 4728 scope.go:117] "RemoveContainer" containerID="4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.280568 4728 scope.go:117] "RemoveContainer" containerID="e830bbb7bccc597ae4fa0552d5e3a3839ce40fcd064e55a0b70c8ded169e9213" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.311033 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-ovsdbserver-nb\") pod \"355651da-79e8-4420-addf-f27c8ec3e9e7\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.311104 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-config\") pod \"355651da-79e8-4420-addf-f27c8ec3e9e7\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.311141 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x655w\" (UniqueName: \"kubernetes.io/projected/355651da-79e8-4420-addf-f27c8ec3e9e7-kube-api-access-x655w\") pod \"355651da-79e8-4420-addf-f27c8ec3e9e7\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.311200 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-dns-svc\") pod \"355651da-79e8-4420-addf-f27c8ec3e9e7\" (UID: \"355651da-79e8-4420-addf-f27c8ec3e9e7\") " Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.314956 4728 scope.go:117] "RemoveContainer" containerID="4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e" Feb 04 11:45:16 crc kubenswrapper[4728]: E0204 11:45:16.317936 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e\": container with ID starting with 4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e not found: ID does not exist" containerID="4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.317980 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e"} err="failed to get container status \"4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e\": rpc error: code = NotFound desc = could not find container 
\"4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e\": container with ID starting with 4ef13b9cf3edabff71095b4013fb9153e17382c5d91afaad2940c74d8ed3738e not found: ID does not exist" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.318005 4728 scope.go:117] "RemoveContainer" containerID="e830bbb7bccc597ae4fa0552d5e3a3839ce40fcd064e55a0b70c8ded169e9213" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.317993 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355651da-79e8-4420-addf-f27c8ec3e9e7-kube-api-access-x655w" (OuterVolumeSpecName: "kube-api-access-x655w") pod "355651da-79e8-4420-addf-f27c8ec3e9e7" (UID: "355651da-79e8-4420-addf-f27c8ec3e9e7"). InnerVolumeSpecName "kube-api-access-x655w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:16 crc kubenswrapper[4728]: E0204 11:45:16.318671 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e830bbb7bccc597ae4fa0552d5e3a3839ce40fcd064e55a0b70c8ded169e9213\": container with ID starting with e830bbb7bccc597ae4fa0552d5e3a3839ce40fcd064e55a0b70c8ded169e9213 not found: ID does not exist" containerID="e830bbb7bccc597ae4fa0552d5e3a3839ce40fcd064e55a0b70c8ded169e9213" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.318719 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e830bbb7bccc597ae4fa0552d5e3a3839ce40fcd064e55a0b70c8ded169e9213"} err="failed to get container status \"e830bbb7bccc597ae4fa0552d5e3a3839ce40fcd064e55a0b70c8ded169e9213\": rpc error: code = NotFound desc = could not find container \"e830bbb7bccc597ae4fa0552d5e3a3839ce40fcd064e55a0b70c8ded169e9213\": container with ID starting with e830bbb7bccc597ae4fa0552d5e3a3839ce40fcd064e55a0b70c8ded169e9213 not found: ID does not exist" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.392310 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "355651da-79e8-4420-addf-f27c8ec3e9e7" (UID: "355651da-79e8-4420-addf-f27c8ec3e9e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.412877 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.412912 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x655w\" (UniqueName: \"kubernetes.io/projected/355651da-79e8-4420-addf-f27c8ec3e9e7-kube-api-access-x655w\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.442875 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-config" (OuterVolumeSpecName: "config") pod "355651da-79e8-4420-addf-f27c8ec3e9e7" (UID: "355651da-79e8-4420-addf-f27c8ec3e9e7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.478380 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "355651da-79e8-4420-addf-f27c8ec3e9e7" (UID: "355651da-79e8-4420-addf-f27c8ec3e9e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.523901 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.524217 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/355651da-79e8-4420-addf-f27c8ec3e9e7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.607336 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bms9x"] Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.613258 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bms9x"] Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.725920 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9211-account-create-update-bgnxc" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.749791 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-74nts"] Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.751099 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ktzhs" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.758644 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ec4e-account-create-update-l68lt" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.759460 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-74nts"] Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.834261 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e29c6f-daef-4720-805c-a5889be741e0-operator-scripts\") pod \"83e29c6f-daef-4720-805c-a5889be741e0\" (UID: \"83e29c6f-daef-4720-805c-a5889be741e0\") " Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.834345 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8222\" (UniqueName: \"kubernetes.io/projected/83e29c6f-daef-4720-805c-a5889be741e0-kube-api-access-s8222\") pod \"83e29c6f-daef-4720-805c-a5889be741e0\" (UID: \"83e29c6f-daef-4720-805c-a5889be741e0\") " Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.834954 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83e29c6f-daef-4720-805c-a5889be741e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83e29c6f-daef-4720-805c-a5889be741e0" (UID: "83e29c6f-daef-4720-805c-a5889be741e0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.838720 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e29c6f-daef-4720-805c-a5889be741e0-kube-api-access-s8222" (OuterVolumeSpecName: "kube-api-access-s8222") pod "83e29c6f-daef-4720-805c-a5889be741e0" (UID: "83e29c6f-daef-4720-805c-a5889be741e0"). InnerVolumeSpecName "kube-api-access-s8222". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.936394 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e18986c-826b-4478-a01a-29fcce1f946f-operator-scripts\") pod \"6e18986c-826b-4478-a01a-29fcce1f946f\" (UID: \"6e18986c-826b-4478-a01a-29fcce1f946f\") " Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.936576 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc936e27-7f59-4b4f-af88-3489bac544c0-operator-scripts\") pod \"bc936e27-7f59-4b4f-af88-3489bac544c0\" (UID: \"bc936e27-7f59-4b4f-af88-3489bac544c0\") " Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.936654 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tls8f\" (UniqueName: \"kubernetes.io/projected/6e18986c-826b-4478-a01a-29fcce1f946f-kube-api-access-tls8f\") pod \"6e18986c-826b-4478-a01a-29fcce1f946f\" (UID: \"6e18986c-826b-4478-a01a-29fcce1f946f\") " Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.936673 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw2b6\" (UniqueName: \"kubernetes.io/projected/bc936e27-7f59-4b4f-af88-3489bac544c0-kube-api-access-vw2b6\") pod \"bc936e27-7f59-4b4f-af88-3489bac544c0\" (UID: \"bc936e27-7f59-4b4f-af88-3489bac544c0\") " Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.936990 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e18986c-826b-4478-a01a-29fcce1f946f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e18986c-826b-4478-a01a-29fcce1f946f" (UID: "6e18986c-826b-4478-a01a-29fcce1f946f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.937100 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc936e27-7f59-4b4f-af88-3489bac544c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc936e27-7f59-4b4f-af88-3489bac544c0" (UID: "bc936e27-7f59-4b4f-af88-3489bac544c0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.938144 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc936e27-7f59-4b4f-af88-3489bac544c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.938177 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e29c6f-daef-4720-805c-a5889be741e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.938191 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8222\" (UniqueName: \"kubernetes.io/projected/83e29c6f-daef-4720-805c-a5889be741e0-kube-api-access-s8222\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.938206 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e18986c-826b-4478-a01a-29fcce1f946f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.939504 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc936e27-7f59-4b4f-af88-3489bac544c0-kube-api-access-vw2b6" (OuterVolumeSpecName: "kube-api-access-vw2b6") pod "bc936e27-7f59-4b4f-af88-3489bac544c0" (UID: "bc936e27-7f59-4b4f-af88-3489bac544c0"). InnerVolumeSpecName "kube-api-access-vw2b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:16 crc kubenswrapper[4728]: I0204 11:45:16.939544 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e18986c-826b-4478-a01a-29fcce1f946f-kube-api-access-tls8f" (OuterVolumeSpecName: "kube-api-access-tls8f") pod "6e18986c-826b-4478-a01a-29fcce1f946f" (UID: "6e18986c-826b-4478-a01a-29fcce1f946f"). InnerVolumeSpecName "kube-api-access-tls8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.039480 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tls8f\" (UniqueName: \"kubernetes.io/projected/6e18986c-826b-4478-a01a-29fcce1f946f-kube-api-access-tls8f\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.039514 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw2b6\" (UniqueName: \"kubernetes.io/projected/bc936e27-7f59-4b4f-af88-3489bac544c0-kube-api-access-vw2b6\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.260675 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ec4e-account-create-update-l68lt" event={"ID":"6e18986c-826b-4478-a01a-29fcce1f946f","Type":"ContainerDied","Data":"e0486115195724df3ae7e4e9a0139b8ef1ec8233d30dd2d4f1dd4f9de9e24dd2"} Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.260713 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0486115195724df3ae7e4e9a0139b8ef1ec8233d30dd2d4f1dd4f9de9e24dd2" Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.260734 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ec4e-account-create-update-l68lt" Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.262131 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9211-account-create-update-bgnxc" event={"ID":"83e29c6f-daef-4720-805c-a5889be741e0","Type":"ContainerDied","Data":"6603d1d95cbc9dcd4ad9a11c03c2a3b4865762aba06adb81938de66f1c8e97d5"} Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.262154 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9211-account-create-update-bgnxc" Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.262159 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6603d1d95cbc9dcd4ad9a11c03c2a3b4865762aba06adb81938de66f1c8e97d5" Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.263426 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ktzhs" event={"ID":"bc936e27-7f59-4b4f-af88-3489bac544c0","Type":"ContainerDied","Data":"df9bc7b66f076ceb022c453df4aa1ee8284e011323982d7edb063d92d09c5398"} Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.263449 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df9bc7b66f076ceb022c453df4aa1ee8284e011323982d7edb063d92d09c5398" Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.263472 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ktzhs" Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.564153 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355651da-79e8-4420-addf-f27c8ec3e9e7" path="/var/lib/kubelet/pods/355651da-79e8-4420-addf-f27c8ec3e9e7/volumes" Feb 04 11:45:17 crc kubenswrapper[4728]: I0204 11:45:17.565336 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3" path="/var/lib/kubelet/pods/feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3/volumes" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.936884 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-lwtzn"] Feb 04 11:45:18 crc kubenswrapper[4728]: E0204 11:45:18.937276 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937294 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: E0204 11:45:18.937313 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355651da-79e8-4420-addf-f27c8ec3e9e7" containerName="dnsmasq-dns" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937320 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="355651da-79e8-4420-addf-f27c8ec3e9e7" containerName="dnsmasq-dns" Feb 04 11:45:18 crc kubenswrapper[4728]: E0204 11:45:18.937332 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beac9f4f-a615-4244-9ba8-ded8ce531f3b" containerName="mariadb-database-create" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937339 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="beac9f4f-a615-4244-9ba8-ded8ce531f3b" containerName="mariadb-database-create" Feb 04 11:45:18 crc kubenswrapper[4728]: E0204 11:45:18.937356 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="355651da-79e8-4420-addf-f27c8ec3e9e7" containerName="init" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937363 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="355651da-79e8-4420-addf-f27c8ec3e9e7" containerName="init" Feb 04 11:45:18 crc kubenswrapper[4728]: E0204 11:45:18.937374 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e29c6f-daef-4720-805c-a5889be741e0" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937381 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e29c6f-daef-4720-805c-a5889be741e0" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: E0204 11:45:18.937404 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc936e27-7f59-4b4f-af88-3489bac544c0" containerName="mariadb-database-create" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937412 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc936e27-7f59-4b4f-af88-3489bac544c0" containerName="mariadb-database-create" Feb 04 11:45:18 crc kubenswrapper[4728]: E0204 11:45:18.937423 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e18986c-826b-4478-a01a-29fcce1f946f" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937429 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e18986c-826b-4478-a01a-29fcce1f946f" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: E0204 11:45:18.937442 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937449 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937627 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc936e27-7f59-4b4f-af88-3489bac544c0" containerName="mariadb-database-create" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937649 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="beac9f4f-a615-4244-9ba8-ded8ce531f3b" containerName="mariadb-database-create" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937658 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e18986c-826b-4478-a01a-29fcce1f946f" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937671 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e29c6f-daef-4720-805c-a5889be741e0" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937680 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937694 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="355651da-79e8-4420-addf-f27c8ec3e9e7" containerName="dnsmasq-dns" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.937705 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb11fca-b60e-4b6d-91e2-dc1f4ca1ccc3" containerName="mariadb-account-create-update" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.938401 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.941874 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rnmhk" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.943401 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 04 11:45:18 crc kubenswrapper[4728]: I0204 11:45:18.945386 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-lwtzn"] Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.076516 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-config-data\") pod \"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.076614 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5d66\" (UniqueName: \"kubernetes.io/projected/5552fcdf-e47f-47e8-acde-ed2e74f54188-kube-api-access-g5d66\") pod \"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.076785 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-combined-ca-bundle\") pod \"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.076869 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-db-sync-config-data\") pod \"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.178700 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-config-data\") pod \"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.178776 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5d66\" (UniqueName: \"kubernetes.io/projected/5552fcdf-e47f-47e8-acde-ed2e74f54188-kube-api-access-g5d66\") pod \"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.178853 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-combined-ca-bundle\") pod \"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.178882 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-db-sync-config-data\") pod 
\"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.184928 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-config-data\") pod \"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.188954 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-combined-ca-bundle\") pod \"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.199204 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-db-sync-config-data\") pod \"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.202421 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5d66\" (UniqueName: \"kubernetes.io/projected/5552fcdf-e47f-47e8-acde-ed2e74f54188-kube-api-access-g5d66\") pod \"glance-db-sync-lwtzn\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.277117 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:19 crc kubenswrapper[4728]: W0204 11:45:19.874730 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5552fcdf_e47f_47e8_acde_ed2e74f54188.slice/crio-897b10dc5e8366560ad41bbde2b4cd12c13adb61176dc8fd9872d3ec72dce29d WatchSource:0}: Error finding container 897b10dc5e8366560ad41bbde2b4cd12c13adb61176dc8fd9872d3ec72dce29d: Status 404 returned error can't find the container with id 897b10dc5e8366560ad41bbde2b4cd12c13adb61176dc8fd9872d3ec72dce29d Feb 04 11:45:19 crc kubenswrapper[4728]: I0204 11:45:19.883459 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-lwtzn"] Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.246699 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ld9hj"] Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.247631 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ld9hj" Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.249265 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.258375 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ld9hj"] Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.296354 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lwtzn" event={"ID":"5552fcdf-e47f-47e8-acde-ed2e74f54188","Type":"ContainerStarted","Data":"897b10dc5e8366560ad41bbde2b4cd12c13adb61176dc8fd9872d3ec72dce29d"} Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.398160 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/697a3857-e695-4f57-b8c7-634205c1a46e-operator-scripts\") pod \"root-account-create-update-ld9hj\" (UID: \"697a3857-e695-4f57-b8c7-634205c1a46e\") " pod="openstack/root-account-create-update-ld9hj" Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.398699 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2xzn\" (UniqueName: \"kubernetes.io/projected/697a3857-e695-4f57-b8c7-634205c1a46e-kube-api-access-l2xzn\") pod \"root-account-create-update-ld9hj\" (UID: \"697a3857-e695-4f57-b8c7-634205c1a46e\") " pod="openstack/root-account-create-update-ld9hj" Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.500776 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2xzn\" (UniqueName: \"kubernetes.io/projected/697a3857-e695-4f57-b8c7-634205c1a46e-kube-api-access-l2xzn\") pod \"root-account-create-update-ld9hj\" (UID: \"697a3857-e695-4f57-b8c7-634205c1a46e\") " pod="openstack/root-account-create-update-ld9hj" Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.500906 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/697a3857-e695-4f57-b8c7-634205c1a46e-operator-scripts\") pod \"root-account-create-update-ld9hj\" (UID: \"697a3857-e695-4f57-b8c7-634205c1a46e\") " pod="openstack/root-account-create-update-ld9hj" Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.501774 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/697a3857-e695-4f57-b8c7-634205c1a46e-operator-scripts\") pod \"root-account-create-update-ld9hj\" (UID: \"697a3857-e695-4f57-b8c7-634205c1a46e\") " pod="openstack/root-account-create-update-ld9hj" Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.533125 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2xzn\" (UniqueName: \"kubernetes.io/projected/697a3857-e695-4f57-b8c7-634205c1a46e-kube-api-access-l2xzn\") pod \"root-account-create-update-ld9hj\" (UID: \"697a3857-e695-4f57-b8c7-634205c1a46e\") " pod="openstack/root-account-create-update-ld9hj" Feb 04 11:45:20 crc kubenswrapper[4728]: I0204 11:45:20.567391 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ld9hj"
Feb 04 11:45:21 crc kubenswrapper[4728]: I0204 11:45:21.000863 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ld9hj"]
Feb 04 11:45:21 crc kubenswrapper[4728]: W0204 11:45:21.005696 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod697a3857_e695_4f57_b8c7_634205c1a46e.slice/crio-3c3a18e81f024956b33330cd8a740d366e6be48b320a0eb3e8664dc791ff4906 WatchSource:0}: Error finding container 3c3a18e81f024956b33330cd8a740d366e6be48b320a0eb3e8664dc791ff4906: Status 404 returned error can't find the container with id 3c3a18e81f024956b33330cd8a740d366e6be48b320a0eb3e8664dc791ff4906
Feb 04 11:45:21 crc kubenswrapper[4728]: I0204 11:45:21.306509 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ld9hj" event={"ID":"697a3857-e695-4f57-b8c7-634205c1a46e","Type":"ContainerStarted","Data":"3c3a18e81f024956b33330cd8a740d366e6be48b320a0eb3e8664dc791ff4906"}
Feb 04 11:45:22 crc kubenswrapper[4728]: I0204 11:45:22.317081 4728 generic.go:334] "Generic (PLEG): container finished" podID="2cc0cb1f-508e-4ac0-b653-aeb03317bdd7" containerID="413afee1831e7ec1f0e2f6cf567ab79fb4a6213d3021e577182e64366d68a8b7" exitCode=0
Feb 04 11:45:22 crc kubenswrapper[4728]: I0204 11:45:22.317295 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kjwnv" event={"ID":"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7","Type":"ContainerDied","Data":"413afee1831e7ec1f0e2f6cf567ab79fb4a6213d3021e577182e64366d68a8b7"}
Feb 04 11:45:22 crc kubenswrapper[4728]: I0204 11:45:22.318992 4728 generic.go:334] "Generic (PLEG): container finished" podID="697a3857-e695-4f57-b8c7-634205c1a46e" containerID="8927439f6e4657451e2df09e29e2f50b38aeaecf07546b4380a751b9a8c9a3d7" exitCode=0
Feb 04 11:45:22 crc kubenswrapper[4728]: I0204 11:45:22.319053 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ld9hj" event={"ID":"697a3857-e695-4f57-b8c7-634205c1a46e","Type":"ContainerDied","Data":"8927439f6e4657451e2df09e29e2f50b38aeaecf07546b4380a751b9a8c9a3d7"}
Feb 04 11:45:22 crc kubenswrapper[4728]: I0204 11:45:22.330707 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:22 crc kubenswrapper[4728]: I0204 11:45:22.342248 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea4f2286-1f91-46b5-98af-0ca776207d16-etc-swift\") pod \"swift-storage-0\" (UID: \"ea4f2286-1f91-46b5-98af-0ca776207d16\") " pod="openstack/swift-storage-0"
Feb 04 11:45:22 crc kubenswrapper[4728]: I0204 11:45:22.440310 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 04 11:45:22 crc kubenswrapper[4728]: I0204 11:45:22.971880 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 04 11:45:23 crc kubenswrapper[4728]: I0204 11:45:23.334461 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"48168d3d957529899c1e0e3db87fc300d41065ec95b2bfeff8e433b963509b2e"}
Feb 04 11:45:23 crc kubenswrapper[4728]: I0204 11:45:23.675029 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q2pd5" podUID="6c7c1adf-4c02-42b4-997d-291a7d033983" containerName="ovn-controller" probeResult="failure" output=<
Feb 04 11:45:23 crc kubenswrapper[4728]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 04 11:45:23 crc kubenswrapper[4728]: >
Feb 04 11:45:23 crc kubenswrapper[4728]: I0204 11:45:23.890457 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ld9hj"
Feb 04 11:45:23 crc kubenswrapper[4728]: I0204 11:45:23.898502 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kjwnv"
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.633731 4728 generic.go:334] "Generic (PLEG): container finished" podID="b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" containerID="e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9" exitCode=0
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.633793 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c","Type":"ContainerDied","Data":"e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9"}
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.638937 4728 generic.go:334] "Generic (PLEG): container finished" podID="23b1eaab-360d-4438-b68d-0d61f21ff593" containerID="ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895" exitCode=0
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.639217 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23b1eaab-360d-4438-b68d-0d61f21ff593","Type":"ContainerDied","Data":"ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895"}
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.642141 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kjwnv" event={"ID":"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7","Type":"ContainerDied","Data":"28b8ce6181236959a792fe60d4233d0ad7dd254fea51590b338912e10d10bae9"}
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.642170 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28b8ce6181236959a792fe60d4233d0ad7dd254fea51590b338912e10d10bae9"
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.642213 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kjwnv"
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.643237 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ld9hj" event={"ID":"697a3857-e695-4f57-b8c7-634205c1a46e","Type":"ContainerDied","Data":"3c3a18e81f024956b33330cd8a740d366e6be48b320a0eb3e8664dc791ff4906"}
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.643253 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c3a18e81f024956b33330cd8a740d366e6be48b320a0eb3e8664dc791ff4906"
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.643283 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ld9hj"
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.706967 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-scripts\") pod \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") "
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.707016 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-etc-swift\") pod \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") "
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.707045 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-ring-data-devices\") pod \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") "
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.707061 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-swiftconf\") pod \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") "
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.707090 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-dispersionconf\") pod \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") "
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.707135 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/697a3857-e695-4f57-b8c7-634205c1a46e-operator-scripts\") pod \"697a3857-e695-4f57-b8c7-634205c1a46e\" (UID: \"697a3857-e695-4f57-b8c7-634205c1a46e\") "
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.707172 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-combined-ca-bundle\") pod \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") "
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.707203 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2xzn\" (UniqueName: \"kubernetes.io/projected/697a3857-e695-4f57-b8c7-634205c1a46e-kube-api-access-l2xzn\") pod \"697a3857-e695-4f57-b8c7-634205c1a46e\" (UID: \"697a3857-e695-4f57-b8c7-634205c1a46e\") "
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.707313 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl7bc\" (UniqueName: \"kubernetes.io/projected/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-kube-api-access-vl7bc\") pod \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\" (UID: \"2cc0cb1f-508e-4ac0-b653-aeb03317bdd7\") "
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.708378 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697a3857-e695-4f57-b8c7-634205c1a46e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "697a3857-e695-4f57-b8c7-634205c1a46e" (UID: "697a3857-e695-4f57-b8c7-634205c1a46e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.708465 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7" (UID: "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.709036 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7" (UID: "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.715721 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7" (UID: "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.717688 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-kube-api-access-vl7bc" (OuterVolumeSpecName: "kube-api-access-vl7bc") pod "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7" (UID: "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7"). InnerVolumeSpecName "kube-api-access-vl7bc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.720260 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697a3857-e695-4f57-b8c7-634205c1a46e-kube-api-access-l2xzn" (OuterVolumeSpecName: "kube-api-access-l2xzn") pod "697a3857-e695-4f57-b8c7-634205c1a46e" (UID: "697a3857-e695-4f57-b8c7-634205c1a46e"). InnerVolumeSpecName "kube-api-access-l2xzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.738418 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-scripts" (OuterVolumeSpecName: "scripts") pod "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7" (UID: "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.740204 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7" (UID: "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.760640 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7" (UID: "2cc0cb1f-508e-4ac0-b653-aeb03317bdd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.788832 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.810050 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl7bc\" (UniqueName: \"kubernetes.io/projected/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-kube-api-access-vl7bc\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.810085 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.810096 4728 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.810106 4728 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.810115 4728 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.810124 4728 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.810133 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/697a3857-e695-4f57-b8c7-634205c1a46e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.810142 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc0cb1f-508e-4ac0-b653-aeb03317bdd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:24 crc kubenswrapper[4728]: I0204 11:45:24.810151 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2xzn\" (UniqueName: \"kubernetes.io/projected/697a3857-e695-4f57-b8c7-634205c1a46e-kube-api-access-l2xzn\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:25 crc kubenswrapper[4728]: E0204 11:45:25.082978 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod697a3857_e695_4f57_b8c7_634205c1a46e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc0cb1f_508e_4ac0_b653_aeb03317bdd7.slice\": RecentStats: unable to find data in memory cache]"
Feb 04 11:45:25 crc kubenswrapper[4728]: I0204 11:45:25.653063 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c","Type":"ContainerStarted","Data":"8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6"}
Feb 04 11:45:25 crc kubenswrapper[4728]: I0204 11:45:25.653282 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 04 11:45:25 crc kubenswrapper[4728]: I0204 11:45:25.655557 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23b1eaab-360d-4438-b68d-0d61f21ff593","Type":"ContainerStarted","Data":"e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414"}
Feb 04 11:45:25 crc kubenswrapper[4728]: I0204 11:45:25.655787 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 04 11:45:25 crc kubenswrapper[4728]: I0204 11:45:25.681521 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.455508037 podStartE2EDuration="57.68150054s" podCreationTimestamp="2026-02-04 11:44:28 +0000 UTC" firstStartedPulling="2026-02-04 11:44:42.104008257 +0000 UTC m=+1031.246712642" lastFinishedPulling="2026-02-04 11:44:49.33000076 +0000 UTC m=+1038.472705145" observedRunningTime="2026-02-04 11:45:25.67684264 +0000 UTC m=+1074.819547045" watchObservedRunningTime="2026-02-04 11:45:25.68150054 +0000 UTC m=+1074.824204925"
Feb 04 11:45:25 crc kubenswrapper[4728]: I0204 11:45:25.707944 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=49.143969672 podStartE2EDuration="56.707928659s" podCreationTimestamp="2026-02-04 11:44:29 +0000 UTC" firstStartedPulling="2026-02-04 11:44:41.912342331 +0000 UTC m=+1031.055046716" lastFinishedPulling="2026-02-04 11:44:49.476301318 +0000 UTC m=+1038.619005703" observedRunningTime="2026-02-04 11:45:25.699902268 +0000 UTC m=+1074.842606643" watchObservedRunningTime="2026-02-04 11:45:25.707928659 +0000 UTC m=+1074.850633044"
Feb 04 11:45:26 crc kubenswrapper[4728]: I0204 11:45:26.681950 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"a7d35057a97221c9403fe6c19eed0bfae9a6d172092df8d5dac267c6bf997f51"}
Feb 04 11:45:26 crc kubenswrapper[4728]: I0204 11:45:26.682297 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"d9de2d8ce0e1fda9e55d4274b5656322ca3de4894cc3001a85b87ad6a094c63f"}
Feb 04 11:45:26 crc kubenswrapper[4728]: I0204 11:45:26.682313 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"2d4419d337f5b60e58081413bcbe63d64ad043f23285ccc6428c4daf8df1bee4"}
Feb 04 11:45:26 crc kubenswrapper[4728]: I0204 11:45:26.682323 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"ee32b205b3ac3dfd3519e8576303a1c20f17d9d917b6c3c9e4d6720575220c17"}
Feb 04 11:45:26 crc kubenswrapper[4728]: I0204 11:45:26.828151 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ld9hj"]
Feb 04 11:45:26 crc kubenswrapper[4728]: I0204 11:45:26.836117 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ld9hj"]
Feb 04 11:45:27 crc kubenswrapper[4728]: I0204 11:45:27.563738 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="697a3857-e695-4f57-b8c7-634205c1a46e" path="/var/lib/kubelet/pods/697a3857-e695-4f57-b8c7-634205c1a46e/volumes"
Feb 04 11:45:28 crc kubenswrapper[4728]: I0204 11:45:28.676188 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q2pd5" podUID="6c7c1adf-4c02-42b4-997d-291a7d033983" containerName="ovn-controller" probeResult="failure" output=<
Feb 04 11:45:28 crc kubenswrapper[4728]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 04 11:45:28 crc kubenswrapper[4728]: >
Feb 04 11:45:31 crc kubenswrapper[4728]: I0204 11:45:31.807839 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-q77rh"]
Feb 04 11:45:31 crc kubenswrapper[4728]: E0204 11:45:31.808370 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697a3857-e695-4f57-b8c7-634205c1a46e" containerName="mariadb-account-create-update"
Feb 04 11:45:31 crc kubenswrapper[4728]: I0204 11:45:31.808383 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="697a3857-e695-4f57-b8c7-634205c1a46e" containerName="mariadb-account-create-update"
Feb 04 11:45:31 crc kubenswrapper[4728]: E0204 11:45:31.808397 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc0cb1f-508e-4ac0-b653-aeb03317bdd7" containerName="swift-ring-rebalance"
Feb 04 11:45:31 crc kubenswrapper[4728]: I0204 11:45:31.808403 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc0cb1f-508e-4ac0-b653-aeb03317bdd7" containerName="swift-ring-rebalance"
Feb 04 11:45:31 crc kubenswrapper[4728]: I0204 11:45:31.808556 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="697a3857-e695-4f57-b8c7-634205c1a46e" containerName="mariadb-account-create-update"
Feb 04 11:45:31 crc kubenswrapper[4728]: I0204 11:45:31.808567 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc0cb1f-508e-4ac0-b653-aeb03317bdd7" containerName="swift-ring-rebalance"
Feb 04 11:45:31 crc kubenswrapper[4728]: I0204 11:45:31.809122 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q77rh"
Feb 04 11:45:31 crc kubenswrapper[4728]: I0204 11:45:31.811379 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 04 11:45:31 crc kubenswrapper[4728]: I0204 11:45:31.816626 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q77rh"]
Feb 04 11:45:31 crc kubenswrapper[4728]: I0204 11:45:31.931814 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxt7\" (UniqueName: \"kubernetes.io/projected/6a480c5f-4928-4a57-bb8a-7c5017d06563-kube-api-access-6vxt7\") pod \"root-account-create-update-q77rh\" (UID: \"6a480c5f-4928-4a57-bb8a-7c5017d06563\") " pod="openstack/root-account-create-update-q77rh"
Feb 04 11:45:31 crc kubenswrapper[4728]: I0204 11:45:31.931920 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a480c5f-4928-4a57-bb8a-7c5017d06563-operator-scripts\") pod \"root-account-create-update-q77rh\" (UID: \"6a480c5f-4928-4a57-bb8a-7c5017d06563\") " pod="openstack/root-account-create-update-q77rh"
Feb 04 11:45:32 crc kubenswrapper[4728]: I0204 11:45:32.033280 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vxt7\" (UniqueName: \"kubernetes.io/projected/6a480c5f-4928-4a57-bb8a-7c5017d06563-kube-api-access-6vxt7\") pod \"root-account-create-update-q77rh\" (UID: \"6a480c5f-4928-4a57-bb8a-7c5017d06563\") " pod="openstack/root-account-create-update-q77rh"
Feb 04 11:45:32 crc kubenswrapper[4728]: I0204 11:45:32.033343 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a480c5f-4928-4a57-bb8a-7c5017d06563-operator-scripts\") pod \"root-account-create-update-q77rh\" (UID: \"6a480c5f-4928-4a57-bb8a-7c5017d06563\") " pod="openstack/root-account-create-update-q77rh"
Feb 04 11:45:32 crc kubenswrapper[4728]: I0204 11:45:32.034011 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a480c5f-4928-4a57-bb8a-7c5017d06563-operator-scripts\") pod \"root-account-create-update-q77rh\" (UID: \"6a480c5f-4928-4a57-bb8a-7c5017d06563\") " pod="openstack/root-account-create-update-q77rh"
Feb 04 11:45:32 crc kubenswrapper[4728]: I0204 11:45:32.061916 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vxt7\" (UniqueName: \"kubernetes.io/projected/6a480c5f-4928-4a57-bb8a-7c5017d06563-kube-api-access-6vxt7\") pod \"root-account-create-update-q77rh\" (UID: \"6a480c5f-4928-4a57-bb8a-7c5017d06563\") " pod="openstack/root-account-create-update-q77rh"
Feb 04 11:45:32 crc kubenswrapper[4728]: I0204 11:45:32.138947 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q77rh"
Feb 04 11:45:33 crc kubenswrapper[4728]: I0204 11:45:33.694816 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:45:33 crc kubenswrapper[4728]: I0204 11:45:33.713629 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q2pd5" podUID="6c7c1adf-4c02-42b4-997d-291a7d033983" containerName="ovn-controller" probeResult="failure" output=<
Feb 04 11:45:33 crc kubenswrapper[4728]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 04 11:45:33 crc kubenswrapper[4728]: >
Feb 04 11:45:33 crc kubenswrapper[4728]: I0204 11:45:33.727061 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mf6rw"
Feb 04 11:45:33 crc kubenswrapper[4728]: I0204 11:45:33.874994 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q77rh"]
Feb 04 11:45:33 crc kubenswrapper[4728]: I0204 11:45:33.948298 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q2pd5-config-ll7hl"]
Feb 04 11:45:33 crc kubenswrapper[4728]: I0204 11:45:33.949275 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:33 crc kubenswrapper[4728]: I0204 11:45:33.951629 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 04 11:45:33 crc kubenswrapper[4728]: I0204 11:45:33.962412 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q2pd5-config-ll7hl"]
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.061929 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-log-ovn\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.061981 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-additional-scripts\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.062081 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.062112 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run-ovn\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.062432 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svtz6\" (UniqueName: \"kubernetes.io/projected/2534e5c0-b817-428c-9cd6-29b3a998d6f6-kube-api-access-svtz6\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.062645 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-scripts\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: W0204 11:45:34.090286 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a480c5f_4928_4a57_bb8a_7c5017d06563.slice/crio-7d095b977d6a2d844895f479e743a52c973da5d0e221e393420cc9553d456464 WatchSource:0}: Error finding container 7d095b977d6a2d844895f479e743a52c973da5d0e221e393420cc9553d456464: Status 404 returned error can't find the container with id 7d095b977d6a2d844895f479e743a52c973da5d0e221e393420cc9553d456464
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.164137 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.164193 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run-ovn\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.164259 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svtz6\" (UniqueName: \"kubernetes.io/projected/2534e5c0-b817-428c-9cd6-29b3a998d6f6-kube-api-access-svtz6\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.164290 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-scripts\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.164350 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-log-ovn\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.164376 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-additional-scripts\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.164641 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run-ovn\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.164689 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-log-ovn\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.164739 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.165191 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-additional-scripts\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.166110 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-scripts\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.181076 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svtz6\" (UniqueName: \"kubernetes.io/projected/2534e5c0-b817-428c-9cd6-29b3a998d6f6-kube-api-access-svtz6\") pod \"ovn-controller-q2pd5-config-ll7hl\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") " pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.279025 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.750539 4728 generic.go:334] "Generic (PLEG): container finished" podID="6a480c5f-4928-4a57-bb8a-7c5017d06563" containerID="41a7f8a2fe5ac89a48c85530baf21f0280aef28ed16598cbfeafa294b252759f" exitCode=0
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.751287 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q77rh" event={"ID":"6a480c5f-4928-4a57-bb8a-7c5017d06563","Type":"ContainerDied","Data":"41a7f8a2fe5ac89a48c85530baf21f0280aef28ed16598cbfeafa294b252759f"}
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.752000 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q77rh" event={"ID":"6a480c5f-4928-4a57-bb8a-7c5017d06563","Type":"ContainerStarted","Data":"7d095b977d6a2d844895f479e743a52c973da5d0e221e393420cc9553d456464"}
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.756344 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"75e3f4005b25dbfef4044709d01b4bbfaa7a59847f495f772cc3e99f538c1b7d"}
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.756390 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"1fe5f1c5d31660f1aa9f6eece2f699412631456be00dad2b4e4de68aa5cb7968"}
Feb 04 11:45:34 crc kubenswrapper[4728]: I0204 11:45:34.775366 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q2pd5-config-ll7hl"]
Feb 04 11:45:34 crc kubenswrapper[4728]: W0204 11:45:34.805622 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2534e5c0_b817_428c_9cd6_29b3a998d6f6.slice/crio-6fda1c81253447d54a9ae5be6a2c47428940cf3a210492ced17e8f8b2221f8c1 WatchSource:0}: Error finding container 6fda1c81253447d54a9ae5be6a2c47428940cf3a210492ced17e8f8b2221f8c1: Status 404 returned error can't find the container with id 6fda1c81253447d54a9ae5be6a2c47428940cf3a210492ced17e8f8b2221f8c1
Feb 04 11:45:35 crc kubenswrapper[4728]: I0204 11:45:35.767847 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"b81790d828fb521a9cb96a6598632c290dddf1454ab7bcc1c00b570c91ff715c"}
Feb 04 11:45:35 crc kubenswrapper[4728]: I0204 11:45:35.768431 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"d15ab6bfb0c574972ccb4b1db5ba9615f707c267105dc1f05fa7d4bd1bafdcfd"}
Feb 04 11:45:35 crc kubenswrapper[4728]: I0204 11:45:35.769886 4728 generic.go:334] "Generic (PLEG): container finished" podID="2534e5c0-b817-428c-9cd6-29b3a998d6f6" containerID="6af1b370baa1db28f086f14c8147be9a960bb5a6a385fa381dedf5bc0f8f6d10" exitCode=0
Feb 04 11:45:35 crc kubenswrapper[4728]: I0204 11:45:35.769943 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q2pd5-config-ll7hl" event={"ID":"2534e5c0-b817-428c-9cd6-29b3a998d6f6","Type":"ContainerDied","Data":"6af1b370baa1db28f086f14c8147be9a960bb5a6a385fa381dedf5bc0f8f6d10"}
Feb 04 11:45:35 crc kubenswrapper[4728]: I0204 11:45:35.769962 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q2pd5-config-ll7hl" event={"ID":"2534e5c0-b817-428c-9cd6-29b3a998d6f6","Type":"ContainerStarted","Data":"6fda1c81253447d54a9ae5be6a2c47428940cf3a210492ced17e8f8b2221f8c1"}
Feb 04 11:45:35 crc kubenswrapper[4728]: I0204 11:45:35.772654 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lwtzn" event={"ID":"5552fcdf-e47f-47e8-acde-ed2e74f54188","Type":"ContainerStarted","Data":"cac95b3f8edc828472f9a5ecb75eb070d6c5dc25c7500917acb5f54c9dff69c1"}
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.432844 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q77rh"
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.457584 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-lwtzn" podStartSLOduration=4.236237181 podStartE2EDuration="18.457568423s" podCreationTimestamp="2026-02-04 11:45:18 +0000 UTC" firstStartedPulling="2026-02-04 11:45:19.877324947 +0000 UTC m=+1069.020029332" lastFinishedPulling="2026-02-04 11:45:34.098656189 +0000 UTC m=+1083.241360574" observedRunningTime="2026-02-04 11:45:35.80749681 +0000 UTC m=+1084.950201235" watchObservedRunningTime="2026-02-04 11:45:36.457568423 +0000 UTC m=+1085.600272808"
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.521717 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vxt7\" (UniqueName: \"kubernetes.io/projected/6a480c5f-4928-4a57-bb8a-7c5017d06563-kube-api-access-6vxt7\") pod \"6a480c5f-4928-4a57-bb8a-7c5017d06563\" (UID: \"6a480c5f-4928-4a57-bb8a-7c5017d06563\") "
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.522046 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a480c5f-4928-4a57-bb8a-7c5017d06563-operator-scripts\") pod \"6a480c5f-4928-4a57-bb8a-7c5017d06563\" (UID: \"6a480c5f-4928-4a57-bb8a-7c5017d06563\") "
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.522676 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a480c5f-4928-4a57-bb8a-7c5017d06563-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a480c5f-4928-4a57-bb8a-7c5017d06563" (UID: "6a480c5f-4928-4a57-bb8a-7c5017d06563"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.525993 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a480c5f-4928-4a57-bb8a-7c5017d06563-kube-api-access-6vxt7" (OuterVolumeSpecName: "kube-api-access-6vxt7") pod "6a480c5f-4928-4a57-bb8a-7c5017d06563" (UID: "6a480c5f-4928-4a57-bb8a-7c5017d06563"). InnerVolumeSpecName "kube-api-access-6vxt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.623942 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a480c5f-4928-4a57-bb8a-7c5017d06563-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.623989 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vxt7\" (UniqueName: \"kubernetes.io/projected/6a480c5f-4928-4a57-bb8a-7c5017d06563-kube-api-access-6vxt7\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.795991 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"c92d2e443984d6f731e9500a78c73c8b92093cf4bbcc7485f1ce14b4ef72e2d0"}
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.796044 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"e8caa8947002972df70a8b7028f5d928ce13156b7d7fe985b9baee1fd7c626ef"}
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.799147 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q77rh" event={"ID":"6a480c5f-4928-4a57-bb8a-7c5017d06563","Type":"ContainerDied","Data":"7d095b977d6a2d844895f479e743a52c973da5d0e221e393420cc9553d456464"}
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.799193 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d095b977d6a2d844895f479e743a52c973da5d0e221e393420cc9553d456464"
Feb 04 11:45:36 crc kubenswrapper[4728]: I0204 11:45:36.799209 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q77rh"
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.064229 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.134902 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svtz6\" (UniqueName: \"kubernetes.io/projected/2534e5c0-b817-428c-9cd6-29b3a998d6f6-kube-api-access-svtz6\") pod \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") "
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.134964 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-log-ovn\") pod \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") "
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.135055 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run\") pod \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") "
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.135112 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2534e5c0-b817-428c-9cd6-29b3a998d6f6" (UID: "2534e5c0-b817-428c-9cd6-29b3a998d6f6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.135135 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-scripts\") pod \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") "
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.135156 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run-ovn\") pod \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") "
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.135173 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-additional-scripts\") pod \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\" (UID: \"2534e5c0-b817-428c-9cd6-29b3a998d6f6\") "
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.135234 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2534e5c0-b817-428c-9cd6-29b3a998d6f6" (UID: "2534e5c0-b817-428c-9cd6-29b3a998d6f6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.135226 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run" (OuterVolumeSpecName: "var-run") pod "2534e5c0-b817-428c-9cd6-29b3a998d6f6" (UID: "2534e5c0-b817-428c-9cd6-29b3a998d6f6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.135486 4728 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.135497 4728 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.135505 4728 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2534e5c0-b817-428c-9cd6-29b3a998d6f6-var-run\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.135732 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2534e5c0-b817-428c-9cd6-29b3a998d6f6" (UID: "2534e5c0-b817-428c-9cd6-29b3a998d6f6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.136098 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-scripts" (OuterVolumeSpecName: "scripts") pod "2534e5c0-b817-428c-9cd6-29b3a998d6f6" (UID: "2534e5c0-b817-428c-9cd6-29b3a998d6f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.138953 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2534e5c0-b817-428c-9cd6-29b3a998d6f6-kube-api-access-svtz6" (OuterVolumeSpecName: "kube-api-access-svtz6") pod "2534e5c0-b817-428c-9cd6-29b3a998d6f6" (UID: "2534e5c0-b817-428c-9cd6-29b3a998d6f6"). InnerVolumeSpecName "kube-api-access-svtz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.236791 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svtz6\" (UniqueName: \"kubernetes.io/projected/2534e5c0-b817-428c-9cd6-29b3a998d6f6-kube-api-access-svtz6\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.236827 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.236837 4728 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2534e5c0-b817-428c-9cd6-29b3a998d6f6-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.809811 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q2pd5-config-ll7hl" event={"ID":"2534e5c0-b817-428c-9cd6-29b3a998d6f6","Type":"ContainerDied","Data":"6fda1c81253447d54a9ae5be6a2c47428940cf3a210492ced17e8f8b2221f8c1"}
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.809851 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q2pd5-config-ll7hl"
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.809867 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fda1c81253447d54a9ae5be6a2c47428940cf3a210492ced17e8f8b2221f8c1"
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.817407 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"e46524b6d113b550d771b3a28d8a75e2cb09262e74f62cdc08a0d41116990a97"}
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.817455 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"7e253dbefa3e2c6d062f34ed160f0bd931219b87e14ffd3e6e9e89eabd2c77a5"}
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.817468 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"3509fda22e141864276951b97e360976b59047b5e03a36b639a2d8f18c4f6cfa"}
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.817480 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"1e1b15dbb069c3cb445d19ed73c0ff3c06921f63b74e8c193c17f86022e8e098"}
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.817491 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea4f2286-1f91-46b5-98af-0ca776207d16","Type":"ContainerStarted","Data":"c971eda4f378e5713e4c84a37fae4a2992af4c7c4a85dc5a39e7fb564f4fad9c"}
Feb 04 11:45:37 crc kubenswrapper[4728]: I0204 11:45:37.857938 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.510892305 podStartE2EDuration="32.857919062s" podCreationTimestamp="2026-02-04 11:45:05 +0000 UTC" firstStartedPulling="2026-02-04 11:45:22.978887796 +0000 UTC m=+1072.121592191" lastFinishedPulling="2026-02-04 11:45:36.325914563 +0000 UTC m=+1085.468618948" observedRunningTime="2026-02-04 11:45:37.854190013 +0000 UTC m=+1086.996894408" watchObservedRunningTime="2026-02-04 11:45:37.857919062 +0000 UTC m=+1087.000623447"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.177238 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-w8kwc"]
Feb 04 11:45:38 crc kubenswrapper[4728]: E0204 11:45:38.179867 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2534e5c0-b817-428c-9cd6-29b3a998d6f6" containerName="ovn-config"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.179899 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2534e5c0-b817-428c-9cd6-29b3a998d6f6" containerName="ovn-config"
Feb 04 11:45:38 crc kubenswrapper[4728]: E0204 11:45:38.179923 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a480c5f-4928-4a57-bb8a-7c5017d06563" containerName="mariadb-account-create-update"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.179933 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a480c5f-4928-4a57-bb8a-7c5017d06563" containerName="mariadb-account-create-update"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.180733 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2534e5c0-b817-428c-9cd6-29b3a998d6f6" containerName="ovn-config"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.180795 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a480c5f-4928-4a57-bb8a-7c5017d06563" containerName="mariadb-account-create-update"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.182836 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.185453 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.217089 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-w8kwc"]
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.233602 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q2pd5-config-ll7hl"]
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.245242 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-q2pd5-config-ll7hl"]
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.253807 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-config\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.253875 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.253936 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.254051 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8s8w\" (UniqueName: \"kubernetes.io/projected/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-kube-api-access-p8s8w\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.254133 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.254194 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.355089 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.355162 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.355193 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-config\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.355215 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.355244 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.355292 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8s8w\" (UniqueName: \"kubernetes.io/projected/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-kube-api-access-p8s8w\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.356142 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.358293 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-config\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.361592 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.362030 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.362030 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.374063 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8s8w\" (UniqueName: \"kubernetes.io/projected/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-kube-api-access-p8s8w\") pod \"dnsmasq-dns-764c5664d7-w8kwc\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.524307 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:38 crc kubenswrapper[4728]: I0204 11:45:38.707934 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-q2pd5"
Feb 04 11:45:39 crc kubenswrapper[4728]: I0204 11:45:39.100077 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-w8kwc"]
Feb 04 11:45:39 crc kubenswrapper[4728]: W0204 11:45:39.106587 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58be142e_d49a_44fb_81e4_dc38bc4ea3d1.slice/crio-b2b2e20ed103b3fac8d24b4c53af9801003ddb4a66a008068858b9c36b0ed592 WatchSource:0}: Error finding container b2b2e20ed103b3fac8d24b4c53af9801003ddb4a66a008068858b9c36b0ed592: Status 404 returned error can't find the container with id b2b2e20ed103b3fac8d24b4c53af9801003ddb4a66a008068858b9c36b0ed592
Feb 04 11:45:39 crc kubenswrapper[4728]: I0204 11:45:39.563335 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2534e5c0-b817-428c-9cd6-29b3a998d6f6" path="/var/lib/kubelet/pods/2534e5c0-b817-428c-9cd6-29b3a998d6f6/volumes"
Feb 04 11:45:39 crc kubenswrapper[4728]: I0204 11:45:39.833554 4728 generic.go:334] "Generic (PLEG): container finished" podID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" containerID="b6c4c83e0215046df1192e7145020a38d03ad4da0cd913d08e6399c3558d03d7" exitCode=0
Feb 04 11:45:39 crc kubenswrapper[4728]: I0204 11:45:39.833601 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" event={"ID":"58be142e-d49a-44fb-81e4-dc38bc4ea3d1","Type":"ContainerDied","Data":"b6c4c83e0215046df1192e7145020a38d03ad4da0cd913d08e6399c3558d03d7"}
Feb 04 11:45:39 crc kubenswrapper[4728]: I0204 11:45:39.833630 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" event={"ID":"58be142e-d49a-44fb-81e4-dc38bc4ea3d1","Type":"ContainerStarted","Data":"b2b2e20ed103b3fac8d24b4c53af9801003ddb4a66a008068858b9c36b0ed592"}
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.189972 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.513950 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.606078 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-kvjt5"]
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.607256 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-kvjt5"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.619166 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-kvjt5"]
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.701530 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3528d762-0e78-4914-ae55-f11bb812f322-operator-scripts\") pod \"heat-db-create-kvjt5\" (UID: \"3528d762-0e78-4914-ae55-f11bb812f322\") " pod="openstack/heat-db-create-kvjt5"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.701643 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz7cv\" (UniqueName: \"kubernetes.io/projected/3528d762-0e78-4914-ae55-f11bb812f322-kube-api-access-bz7cv\") pod \"heat-db-create-kvjt5\" (UID: \"3528d762-0e78-4914-ae55-f11bb812f322\") " pod="openstack/heat-db-create-kvjt5"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.785497 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-gzzfl"]
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.786858 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gzzfl"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.799186 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0249-account-create-update-jjdf2"]
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.800480 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0249-account-create-update-jjdf2"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.802453 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz7cv\" (UniqueName: \"kubernetes.io/projected/3528d762-0e78-4914-ae55-f11bb812f322-kube-api-access-bz7cv\") pod \"heat-db-create-kvjt5\" (UID: \"3528d762-0e78-4914-ae55-f11bb812f322\") " pod="openstack/heat-db-create-kvjt5"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.802635 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3528d762-0e78-4914-ae55-f11bb812f322-operator-scripts\") pod \"heat-db-create-kvjt5\" (UID: \"3528d762-0e78-4914-ae55-f11bb812f322\") " pod="openstack/heat-db-create-kvjt5"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.803344 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3528d762-0e78-4914-ae55-f11bb812f322-operator-scripts\") pod \"heat-db-create-kvjt5\" (UID: \"3528d762-0e78-4914-ae55-f11bb812f322\") " pod="openstack/heat-db-create-kvjt5"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.812297 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.812601 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gzzfl"]
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.822177 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0249-account-create-update-jjdf2"]
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.856024 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz7cv\" (UniqueName: \"kubernetes.io/projected/3528d762-0e78-4914-ae55-f11bb812f322-kube-api-access-bz7cv\") pod \"heat-db-create-kvjt5\" (UID: \"3528d762-0e78-4914-ae55-f11bb812f322\") " pod="openstack/heat-db-create-kvjt5"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.896951 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" event={"ID":"58be142e-d49a-44fb-81e4-dc38bc4ea3d1","Type":"ContainerStarted","Data":"c3292f2c90f002aae96298b626c233aa790957652ad7a006bda81763bf464698"}
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.897225 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.908377 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-f565-account-create-update-hn9mk"]
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.909518 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81194f90-78b9-463c-a83e-adce1621a8ec-operator-scripts\") pod \"barbican-0249-account-create-update-jjdf2\" (UID: \"81194f90-78b9-463c-a83e-adce1621a8ec\") " pod="openstack/barbican-0249-account-create-update-jjdf2"
Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.909697 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37fac279-e557-4505-8a93-d7610f2326f0-operator-scripts\") pod \"cinder-db-create-gzzfl\" (UID: \"37fac279-e557-4505-8a93-d7610f2326f0\") "
pod="openstack/cinder-db-create-gzzfl" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.909949 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94nwg\" (UniqueName: \"kubernetes.io/projected/81194f90-78b9-463c-a83e-adce1621a8ec-kube-api-access-94nwg\") pod \"barbican-0249-account-create-update-jjdf2\" (UID: \"81194f90-78b9-463c-a83e-adce1621a8ec\") " pod="openstack/barbican-0249-account-create-update-jjdf2" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.910302 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhz6w\" (UniqueName: \"kubernetes.io/projected/37fac279-e557-4505-8a93-d7610f2326f0-kube-api-access-dhz6w\") pod \"cinder-db-create-gzzfl\" (UID: \"37fac279-e557-4505-8a93-d7610f2326f0\") " pod="openstack/cinder-db-create-gzzfl" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.910976 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f565-account-create-update-hn9mk" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.913452 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.928514 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mg5hg"] Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.942524 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-kvjt5" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.946150 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mg5hg" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.973812 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qdzqf"] Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.974885 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.977263 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.978126 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.978296 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.978607 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-prll6" Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.981343 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-f565-account-create-update-hn9mk"] Feb 04 11:45:40 crc kubenswrapper[4728]: I0204 11:45:40.986327 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qdzqf"] Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.011950 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zbw5\" (UniqueName: \"kubernetes.io/projected/a84078cf-9bd8-4920-9537-c4d1f863e4b2-kube-api-access-5zbw5\") pod \"heat-f565-account-create-update-hn9mk\" (UID: \"a84078cf-9bd8-4920-9537-c4d1f863e4b2\") " pod="openstack/heat-f565-account-create-update-hn9mk" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.012100 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlvqn\" (UniqueName: \"kubernetes.io/projected/d8279c44-c9f5-40f7-a933-bf7b91f30750-kube-api-access-hlvqn\") pod \"keystone-db-sync-qdzqf\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.012191 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9cdbe1-394e-4100-ad1c-53851f9955fb-operator-scripts\") pod \"barbican-db-create-mg5hg\" (UID: \"1c9cdbe1-394e-4100-ad1c-53851f9955fb\") " pod="openstack/barbican-db-create-mg5hg" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.012237 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81194f90-78b9-463c-a83e-adce1621a8ec-operator-scripts\") pod \"barbican-0249-account-create-update-jjdf2\" (UID: \"81194f90-78b9-463c-a83e-adce1621a8ec\") " pod="openstack/barbican-0249-account-create-update-jjdf2" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.012292 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37fac279-e557-4505-8a93-d7610f2326f0-operator-scripts\") pod \"cinder-db-create-gzzfl\" (UID: \"37fac279-e557-4505-8a93-d7610f2326f0\") " pod="openstack/cinder-db-create-gzzfl" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.012417 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjwv2\" (UniqueName: \"kubernetes.io/projected/1c9cdbe1-394e-4100-ad1c-53851f9955fb-kube-api-access-qjwv2\") pod \"barbican-db-create-mg5hg\" (UID: \"1c9cdbe1-394e-4100-ad1c-53851f9955fb\") " pod="openstack/barbican-db-create-mg5hg" Feb 04 11:45:41 crc 
kubenswrapper[4728]: I0204 11:45:41.012493 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94nwg\" (UniqueName: \"kubernetes.io/projected/81194f90-78b9-463c-a83e-adce1621a8ec-kube-api-access-94nwg\") pod \"barbican-0249-account-create-update-jjdf2\" (UID: \"81194f90-78b9-463c-a83e-adce1621a8ec\") " pod="openstack/barbican-0249-account-create-update-jjdf2" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.012524 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhz6w\" (UniqueName: \"kubernetes.io/projected/37fac279-e557-4505-8a93-d7610f2326f0-kube-api-access-dhz6w\") pod \"cinder-db-create-gzzfl\" (UID: \"37fac279-e557-4505-8a93-d7610f2326f0\") " pod="openstack/cinder-db-create-gzzfl" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.012631 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-config-data\") pod \"keystone-db-sync-qdzqf\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.012664 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84078cf-9bd8-4920-9537-c4d1f863e4b2-operator-scripts\") pod \"heat-f565-account-create-update-hn9mk\" (UID: \"a84078cf-9bd8-4920-9537-c4d1f863e4b2\") " pod="openstack/heat-f565-account-create-update-hn9mk" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.012739 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-combined-ca-bundle\") pod \"keystone-db-sync-qdzqf\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.012953 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81194f90-78b9-463c-a83e-adce1621a8ec-operator-scripts\") pod \"barbican-0249-account-create-update-jjdf2\" (UID: \"81194f90-78b9-463c-a83e-adce1621a8ec\") " pod="openstack/barbican-0249-account-create-update-jjdf2" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.013080 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37fac279-e557-4505-8a93-d7610f2326f0-operator-scripts\") pod \"cinder-db-create-gzzfl\" (UID: \"37fac279-e557-4505-8a93-d7610f2326f0\") " pod="openstack/cinder-db-create-gzzfl" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.038258 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" podStartSLOduration=3.038236554 podStartE2EDuration="3.038236554s" podCreationTimestamp="2026-02-04 11:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:45:40.928026434 +0000 UTC m=+1090.070730819" watchObservedRunningTime="2026-02-04 11:45:41.038236554 +0000 UTC m=+1090.180940939" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.042821 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mg5hg"] Feb 04 11:45:41 crc 
kubenswrapper[4728]: I0204 11:45:41.054698 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhz6w\" (UniqueName: \"kubernetes.io/projected/37fac279-e557-4505-8a93-d7610f2326f0-kube-api-access-dhz6w\") pod \"cinder-db-create-gzzfl\" (UID: \"37fac279-e557-4505-8a93-d7610f2326f0\") " pod="openstack/cinder-db-create-gzzfl" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.054870 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94nwg\" (UniqueName: \"kubernetes.io/projected/81194f90-78b9-463c-a83e-adce1621a8ec-kube-api-access-94nwg\") pod \"barbican-0249-account-create-update-jjdf2\" (UID: \"81194f90-78b9-463c-a83e-adce1621a8ec\") " pod="openstack/barbican-0249-account-create-update-jjdf2" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.078262 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-m8z6k"] Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.079378 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m8z6k" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.083648 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m8z6k"] Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.093389 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-64d0-account-create-update-xrn2s"] Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.094671 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-64d0-account-create-update-xrn2s" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.101228 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.106426 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-gzzfl" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.113930 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-config-data\") pod \"keystone-db-sync-qdzqf\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.113966 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84078cf-9bd8-4920-9537-c4d1f863e4b2-operator-scripts\") pod \"heat-f565-account-create-update-hn9mk\" (UID: \"a84078cf-9bd8-4920-9537-c4d1f863e4b2\") " pod="openstack/heat-f565-account-create-update-hn9mk" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.113990 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54311305-ef02-48b5-a913-4b5c8fa9730b-operator-scripts\") pod \"neutron-db-create-m8z6k\" (UID: \"54311305-ef02-48b5-a913-4b5c8fa9730b\") " pod="openstack/neutron-db-create-m8z6k" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.114022 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-combined-ca-bundle\") pod \"keystone-db-sync-qdzqf\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.114056 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zbw5\" (UniqueName: \"kubernetes.io/projected/a84078cf-9bd8-4920-9537-c4d1f863e4b2-kube-api-access-5zbw5\") pod \"heat-f565-account-create-update-hn9mk\" (UID: \"a84078cf-9bd8-4920-9537-c4d1f863e4b2\") " pod="openstack/heat-f565-account-create-update-hn9mk" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.114088 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlvqn\" (UniqueName: \"kubernetes.io/projected/d8279c44-c9f5-40f7-a933-bf7b91f30750-kube-api-access-hlvqn\") pod \"keystone-db-sync-qdzqf\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.114105 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6f6\" (UniqueName: \"kubernetes.io/projected/54311305-ef02-48b5-a913-4b5c8fa9730b-kube-api-access-wf6f6\") pod \"neutron-db-create-m8z6k\" (UID: \"54311305-ef02-48b5-a913-4b5c8fa9730b\") " pod="openstack/neutron-db-create-m8z6k" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.114129 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9cdbe1-394e-4100-ad1c-53851f9955fb-operator-scripts\") pod \"barbican-db-create-mg5hg\" (UID: \"1c9cdbe1-394e-4100-ad1c-53851f9955fb\") " pod="openstack/barbican-db-create-mg5hg" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.114322 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjwv2\" (UniqueName: \"kubernetes.io/projected/1c9cdbe1-394e-4100-ad1c-53851f9955fb-kube-api-access-qjwv2\") pod \"barbican-db-create-mg5hg\" (UID: 
\"1c9cdbe1-394e-4100-ad1c-53851f9955fb\") " pod="openstack/barbican-db-create-mg5hg" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.118142 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84078cf-9bd8-4920-9537-c4d1f863e4b2-operator-scripts\") pod \"heat-f565-account-create-update-hn9mk\" (UID: \"a84078cf-9bd8-4920-9537-c4d1f863e4b2\") " pod="openstack/heat-f565-account-create-update-hn9mk" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.119508 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-config-data\") pod \"keystone-db-sync-qdzqf\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.119554 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9cdbe1-394e-4100-ad1c-53851f9955fb-operator-scripts\") pod \"barbican-db-create-mg5hg\" (UID: \"1c9cdbe1-394e-4100-ad1c-53851f9955fb\") " pod="openstack/barbican-db-create-mg5hg" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.120608 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-combined-ca-bundle\") pod \"keystone-db-sync-qdzqf\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.131803 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0249-account-create-update-jjdf2" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.139034 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlvqn\" (UniqueName: \"kubernetes.io/projected/d8279c44-c9f5-40f7-a933-bf7b91f30750-kube-api-access-hlvqn\") pod \"keystone-db-sync-qdzqf\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.157911 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjwv2\" (UniqueName: \"kubernetes.io/projected/1c9cdbe1-394e-4100-ad1c-53851f9955fb-kube-api-access-qjwv2\") pod \"barbican-db-create-mg5hg\" (UID: \"1c9cdbe1-394e-4100-ad1c-53851f9955fb\") " pod="openstack/barbican-db-create-mg5hg" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.159500 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zbw5\" (UniqueName: \"kubernetes.io/projected/a84078cf-9bd8-4920-9537-c4d1f863e4b2-kube-api-access-5zbw5\") pod \"heat-f565-account-create-update-hn9mk\" (UID: \"a84078cf-9bd8-4920-9537-c4d1f863e4b2\") " pod="openstack/heat-f565-account-create-update-hn9mk" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.159577 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-64d0-account-create-update-xrn2s"] Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.174418 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mg5hg" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.190856 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.215354 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24zps\" (UniqueName: \"kubernetes.io/projected/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-kube-api-access-24zps\") pod \"cinder-64d0-account-create-update-xrn2s\" (UID: \"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a\") " pod="openstack/cinder-64d0-account-create-update-xrn2s" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.215442 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54311305-ef02-48b5-a913-4b5c8fa9730b-operator-scripts\") pod \"neutron-db-create-m8z6k\" (UID: \"54311305-ef02-48b5-a913-4b5c8fa9730b\") " pod="openstack/neutron-db-create-m8z6k" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.215498 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-operator-scripts\") pod \"cinder-64d0-account-create-update-xrn2s\" (UID: \"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a\") " pod="openstack/cinder-64d0-account-create-update-xrn2s" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.215522 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6f6\" (UniqueName: \"kubernetes.io/projected/54311305-ef02-48b5-a913-4b5c8fa9730b-kube-api-access-wf6f6\") pod \"neutron-db-create-m8z6k\" (UID: \"54311305-ef02-48b5-a913-4b5c8fa9730b\") " pod="openstack/neutron-db-create-m8z6k" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.216893 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54311305-ef02-48b5-a913-4b5c8fa9730b-operator-scripts\") pod \"neutron-db-create-m8z6k\" (UID: \"54311305-ef02-48b5-a913-4b5c8fa9730b\") " pod="openstack/neutron-db-create-m8z6k" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.247855 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-11c2-account-create-update-kzkht"] Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.249060 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-11c2-account-create-update-kzkht" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.250319 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6f6\" (UniqueName: \"kubernetes.io/projected/54311305-ef02-48b5-a913-4b5c8fa9730b-kube-api-access-wf6f6\") pod \"neutron-db-create-m8z6k\" (UID: \"54311305-ef02-48b5-a913-4b5c8fa9730b\") " pod="openstack/neutron-db-create-m8z6k" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.250770 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.255185 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-11c2-account-create-update-kzkht"] Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.282258 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-f565-account-create-update-hn9mk" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.319064 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-operator-scripts\") pod \"cinder-64d0-account-create-update-xrn2s\" (UID: \"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a\") " pod="openstack/cinder-64d0-account-create-update-xrn2s" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.319190 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6psvm\" (UniqueName: \"kubernetes.io/projected/bdfe3ee4-3d03-4e23-b977-b90d512610ab-kube-api-access-6psvm\") pod \"neutron-11c2-account-create-update-kzkht\" (UID: \"bdfe3ee4-3d03-4e23-b977-b90d512610ab\") " pod="openstack/neutron-11c2-account-create-update-kzkht" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.319236 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24zps\" (UniqueName: \"kubernetes.io/projected/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-kube-api-access-24zps\") pod \"cinder-64d0-account-create-update-xrn2s\" (UID: \"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a\") " pod="openstack/cinder-64d0-account-create-update-xrn2s" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.319267 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdfe3ee4-3d03-4e23-b977-b90d512610ab-operator-scripts\") pod \"neutron-11c2-account-create-update-kzkht\" (UID: \"bdfe3ee4-3d03-4e23-b977-b90d512610ab\") " pod="openstack/neutron-11c2-account-create-update-kzkht" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.321172 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-operator-scripts\") pod \"cinder-64d0-account-create-update-xrn2s\" (UID: \"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a\") " pod="openstack/cinder-64d0-account-create-update-xrn2s" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.359449 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24zps\" (UniqueName: \"kubernetes.io/projected/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-kube-api-access-24zps\") pod \"cinder-64d0-account-create-update-xrn2s\" (UID: \"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a\") " pod="openstack/cinder-64d0-account-create-update-xrn2s" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.423585 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6psvm\" (UniqueName: \"kubernetes.io/projected/bdfe3ee4-3d03-4e23-b977-b90d512610ab-kube-api-access-6psvm\") pod \"neutron-11c2-account-create-update-kzkht\" (UID: \"bdfe3ee4-3d03-4e23-b977-b90d512610ab\") " pod="openstack/neutron-11c2-account-create-update-kzkht" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.423625 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdfe3ee4-3d03-4e23-b977-b90d512610ab-operator-scripts\") pod \"neutron-11c2-account-create-update-kzkht\" (UID: \"bdfe3ee4-3d03-4e23-b977-b90d512610ab\") " pod="openstack/neutron-11c2-account-create-update-kzkht" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.424308 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdfe3ee4-3d03-4e23-b977-b90d512610ab-operator-scripts\") pod \"neutron-11c2-account-create-update-kzkht\" (UID: \"bdfe3ee4-3d03-4e23-b977-b90d512610ab\") " pod="openstack/neutron-11c2-account-create-update-kzkht" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.454127 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6psvm\" (UniqueName: \"kubernetes.io/projected/bdfe3ee4-3d03-4e23-b977-b90d512610ab-kube-api-access-6psvm\") pod \"neutron-11c2-account-create-update-kzkht\" (UID: \"bdfe3ee4-3d03-4e23-b977-b90d512610ab\") " pod="openstack/neutron-11c2-account-create-update-kzkht" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.502643 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m8z6k" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.509410 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-kvjt5"] Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.550435 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-64d0-account-create-update-xrn2s" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.577415 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-11c2-account-create-update-kzkht" Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.907438 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kvjt5" event={"ID":"3528d762-0e78-4914-ae55-f11bb812f322","Type":"ContainerStarted","Data":"0cdfe1066c70c459d82a4eb1f33f71330204369a5684ad299098f7dea82fdd07"} Feb 04 11:45:41 crc kubenswrapper[4728]: I0204 11:45:41.954362 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gzzfl"] Feb 04 11:45:42 crc kubenswrapper[4728]: I0204 11:45:42.000981 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-f565-account-create-update-hn9mk"] Feb 04 11:45:42 crc kubenswrapper[4728]: I0204 11:45:42.008658 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qdzqf"] Feb 04 11:45:42 crc kubenswrapper[4728]: W0204 11:45:42.009114 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda84078cf_9bd8_4920_9537_c4d1f863e4b2.slice/crio-2f5091ab3093dda54d10d250ca8517829d9e1033b4dd53e4ab67e38277ad5732 WatchSource:0}: Error finding container 2f5091ab3093dda54d10d250ca8517829d9e1033b4dd53e4ab67e38277ad5732: Status 404 returned error can't find the container with id 2f5091ab3093dda54d10d250ca8517829d9e1033b4dd53e4ab67e38277ad5732 Feb 04 11:45:42 crc kubenswrapper[4728]: I0204 11:45:42.029193 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mg5hg"] Feb 04 11:45:42 crc kubenswrapper[4728]: I0204 11:45:42.236636 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0249-account-create-update-jjdf2"] Feb 04 11:45:42 crc kubenswrapper[4728]: I0204 11:45:42.306846 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m8z6k"] Feb 04 11:45:42 crc kubenswrapper[4728]: I0204 11:45:42.330845 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-11c2-account-create-update-kzkht"] Feb 04 11:45:42 crc kubenswrapper[4728]: I0204 
11:45:42.358159 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-64d0-account-create-update-xrn2s"] Feb 04 11:45:42 crc kubenswrapper[4728]: I0204 11:45:42.917179 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f565-account-create-update-hn9mk" event={"ID":"a84078cf-9bd8-4920-9537-c4d1f863e4b2","Type":"ContainerStarted","Data":"2f5091ab3093dda54d10d250ca8517829d9e1033b4dd53e4ab67e38277ad5732"} Feb 04 11:45:42 crc kubenswrapper[4728]: I0204 11:45:42.919016 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gzzfl" event={"ID":"37fac279-e557-4505-8a93-d7610f2326f0","Type":"ContainerStarted","Data":"b720b59c6469d079a6c550f2be0fe63177af06f029143a69ad3ece1e7a8c1d2e"} Feb 04 11:45:45 crc kubenswrapper[4728]: W0204 11:45:45.819791 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8279c44_c9f5_40f7_a933_bf7b91f30750.slice/crio-4a694e8c54cc3edde0a0b3a188e54b463735b32c0d7cea0b73446ea3dc9b4fc1 WatchSource:0}: Error finding container 4a694e8c54cc3edde0a0b3a188e54b463735b32c0d7cea0b73446ea3dc9b4fc1: Status 404 returned error can't find the container with id 4a694e8c54cc3edde0a0b3a188e54b463735b32c0d7cea0b73446ea3dc9b4fc1 Feb 04 11:45:45 crc kubenswrapper[4728]: W0204 11:45:45.821991 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c9cdbe1_394e_4100_ad1c_53851f9955fb.slice/crio-893ac47b764e8034f5a32ddd31d04326ea79ae82f6bf5aa1d769ef7d572f5e7d WatchSource:0}: Error finding container 893ac47b764e8034f5a32ddd31d04326ea79ae82f6bf5aa1d769ef7d572f5e7d: Status 404 returned error can't find the container with id 893ac47b764e8034f5a32ddd31d04326ea79ae82f6bf5aa1d769ef7d572f5e7d Feb 04 11:45:45 crc kubenswrapper[4728]: W0204 11:45:45.822710 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81194f90_78b9_463c_a83e_adce1621a8ec.slice/crio-e672cf66514a6bfefda82cf11a9659b51442123f9a27d706e02c3498ec14d2e6 WatchSource:0}: Error finding container e672cf66514a6bfefda82cf11a9659b51442123f9a27d706e02c3498ec14d2e6: Status 404 returned error can't find the container with id e672cf66514a6bfefda82cf11a9659b51442123f9a27d706e02c3498ec14d2e6 Feb 04 11:45:45 crc kubenswrapper[4728]: W0204 11:45:45.826492 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54311305_ef02_48b5_a913_4b5c8fa9730b.slice/crio-11e8c8e0e312713130b45769aa16fa2ce88acffe4a380b1262dcc81be6ebfb91 WatchSource:0}: Error finding container 11e8c8e0e312713130b45769aa16fa2ce88acffe4a380b1262dcc81be6ebfb91: Status 404 returned error can't find the container with id 11e8c8e0e312713130b45769aa16fa2ce88acffe4a380b1262dcc81be6ebfb91 Feb 04 11:45:45 crc kubenswrapper[4728]: W0204 11:45:45.849005 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ebaecfd_261e_41a3_be7b_9e21a9b7a10a.slice/crio-b5fd4fe59b4ed869ac9992c49f93da4233f8a6bed8b7e57c9b378e6e21bfbbbc WatchSource:0}: Error finding container b5fd4fe59b4ed869ac9992c49f93da4233f8a6bed8b7e57c9b378e6e21bfbbbc: Status 404 returned error can't find the container with id b5fd4fe59b4ed869ac9992c49f93da4233f8a6bed8b7e57c9b378e6e21bfbbbc Feb 04 11:45:45 crc kubenswrapper[4728]: I0204 11:45:45.945217 4728 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-0249-account-create-update-jjdf2" event={"ID":"81194f90-78b9-463c-a83e-adce1621a8ec","Type":"ContainerStarted","Data":"e672cf66514a6bfefda82cf11a9659b51442123f9a27d706e02c3498ec14d2e6"} Feb 04 11:45:45 crc kubenswrapper[4728]: I0204 11:45:45.946297 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qdzqf" event={"ID":"d8279c44-c9f5-40f7-a933-bf7b91f30750","Type":"ContainerStarted","Data":"4a694e8c54cc3edde0a0b3a188e54b463735b32c0d7cea0b73446ea3dc9b4fc1"} Feb 04 11:45:45 crc kubenswrapper[4728]: I0204 11:45:45.947303 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mg5hg" event={"ID":"1c9cdbe1-394e-4100-ad1c-53851f9955fb","Type":"ContainerStarted","Data":"893ac47b764e8034f5a32ddd31d04326ea79ae82f6bf5aa1d769ef7d572f5e7d"} Feb 04 11:45:45 crc kubenswrapper[4728]: I0204 11:45:45.948349 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-11c2-account-create-update-kzkht" event={"ID":"bdfe3ee4-3d03-4e23-b977-b90d512610ab","Type":"ContainerStarted","Data":"b9440cd35df144b651d80f9604352f5d050bd2681d825093e536ea3119c29fa4"} Feb 04 11:45:45 crc kubenswrapper[4728]: I0204 11:45:45.950266 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m8z6k" event={"ID":"54311305-ef02-48b5-a913-4b5c8fa9730b","Type":"ContainerStarted","Data":"11e8c8e0e312713130b45769aa16fa2ce88acffe4a380b1262dcc81be6ebfb91"} Feb 04 11:45:45 crc kubenswrapper[4728]: I0204 11:45:45.951589 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-64d0-account-create-update-xrn2s" event={"ID":"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a","Type":"ContainerStarted","Data":"b5fd4fe59b4ed869ac9992c49f93da4233f8a6bed8b7e57c9b378e6e21bfbbbc"} Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.959043 4728 generic.go:334] "Generic (PLEG): container finished" podID="a84078cf-9bd8-4920-9537-c4d1f863e4b2" containerID="9df5a4154fa2b461b55b57b3c380076491c19628ebc6a5f5b12f41fc0eddf522" exitCode=0 Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.959092 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f565-account-create-update-hn9mk" event={"ID":"a84078cf-9bd8-4920-9537-c4d1f863e4b2","Type":"ContainerDied","Data":"9df5a4154fa2b461b55b57b3c380076491c19628ebc6a5f5b12f41fc0eddf522"} Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.961416 4728 generic.go:334] "Generic (PLEG): container finished" podID="bdfe3ee4-3d03-4e23-b977-b90d512610ab" containerID="31ea12d26a9507e72ec3a47653329e35eb6dedbddea80008a07362458153c0a4" exitCode=0 Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.961480 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-11c2-account-create-update-kzkht" event={"ID":"bdfe3ee4-3d03-4e23-b977-b90d512610ab","Type":"ContainerDied","Data":"31ea12d26a9507e72ec3a47653329e35eb6dedbddea80008a07362458153c0a4"} Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.964345 4728 generic.go:334] "Generic (PLEG): container finished" podID="54311305-ef02-48b5-a913-4b5c8fa9730b" containerID="674ea414f42b45dfd5f58fa6f85a9d498b791146600682fc5fbdc2205ab77ff8" exitCode=0 Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.964404 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m8z6k" event={"ID":"54311305-ef02-48b5-a913-4b5c8fa9730b","Type":"ContainerDied","Data":"674ea414f42b45dfd5f58fa6f85a9d498b791146600682fc5fbdc2205ab77ff8"} Feb 04 
11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.980398 4728 generic.go:334] "Generic (PLEG): container finished" podID="1ebaecfd-261e-41a3-be7b-9e21a9b7a10a" containerID="fd03290c8c69db507ffea81d50b57b9a2266eae929a90580d2046112af1a45b7" exitCode=0 Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.980517 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-64d0-account-create-update-xrn2s" event={"ID":"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a","Type":"ContainerDied","Data":"fd03290c8c69db507ffea81d50b57b9a2266eae929a90580d2046112af1a45b7"} Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.987089 4728 generic.go:334] "Generic (PLEG): container finished" podID="81194f90-78b9-463c-a83e-adce1621a8ec" containerID="a3f4757b77263aea4d424adc927e527d754e0638a16e59f7998abe8d7fd8d28d" exitCode=0 Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.987202 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0249-account-create-update-jjdf2" event={"ID":"81194f90-78b9-463c-a83e-adce1621a8ec","Type":"ContainerDied","Data":"a3f4757b77263aea4d424adc927e527d754e0638a16e59f7998abe8d7fd8d28d"} Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.990989 4728 generic.go:334] "Generic (PLEG): container finished" podID="37fac279-e557-4505-8a93-d7610f2326f0" containerID="d739f6ccc734ed98ae7e62cf6daada056c75d39d07dba6c295e2d68ecb7a6d52" exitCode=0 Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.991093 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gzzfl" event={"ID":"37fac279-e557-4505-8a93-d7610f2326f0","Type":"ContainerDied","Data":"d739f6ccc734ed98ae7e62cf6daada056c75d39d07dba6c295e2d68ecb7a6d52"} Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.998088 4728 generic.go:334] "Generic (PLEG): container finished" podID="1c9cdbe1-394e-4100-ad1c-53851f9955fb" containerID="f9eab5e7a6ce22cf85a477438cd5a84bb945d38339ec548ca84640f72cc21061" exitCode=0 Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.998146 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mg5hg" event={"ID":"1c9cdbe1-394e-4100-ad1c-53851f9955fb","Type":"ContainerDied","Data":"f9eab5e7a6ce22cf85a477438cd5a84bb945d38339ec548ca84640f72cc21061"} Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.999589 4728 generic.go:334] "Generic (PLEG): container finished" podID="3528d762-0e78-4914-ae55-f11bb812f322" containerID="cf1e0a86d567568bbdcf29fa76aa28480ffa62887364760579f8193851c0116d" exitCode=0 Feb 04 11:45:46 crc kubenswrapper[4728]: I0204 11:45:46.999619 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kvjt5" event={"ID":"3528d762-0e78-4914-ae55-f11bb812f322","Type":"ContainerDied","Data":"cf1e0a86d567568bbdcf29fa76aa28480ffa62887364760579f8193851c0116d"} Feb 04 11:45:48 crc kubenswrapper[4728]: I0204 11:45:48.525937 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" Feb 04 11:45:48 crc kubenswrapper[4728]: I0204 11:45:48.617651 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hvgvk"] Feb 04 11:45:48 crc kubenswrapper[4728]: I0204 11:45:48.617874 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-hvgvk" podUID="dead90ba-0731-47f7-9252-45d8ddf2dd5a" containerName="dnsmasq-dns" containerID="cri-o://ae1849fd6482bb0d8fb0c3ff1aaeba1d059d16bcb3b6febe3aa178beb91f0bc7" gracePeriod=10 Feb 04 
11:45:49 crc kubenswrapper[4728]: I0204 11:45:49.045537 4728 generic.go:334] "Generic (PLEG): container finished" podID="5552fcdf-e47f-47e8-acde-ed2e74f54188" containerID="cac95b3f8edc828472f9a5ecb75eb070d6c5dc25c7500917acb5f54c9dff69c1" exitCode=0 Feb 04 11:45:49 crc kubenswrapper[4728]: I0204 11:45:49.045620 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lwtzn" event={"ID":"5552fcdf-e47f-47e8-acde-ed2e74f54188","Type":"ContainerDied","Data":"cac95b3f8edc828472f9a5ecb75eb070d6c5dc25c7500917acb5f54c9dff69c1"} Feb 04 11:45:49 crc kubenswrapper[4728]: I0204 11:45:49.057016 4728 generic.go:334] "Generic (PLEG): container finished" podID="dead90ba-0731-47f7-9252-45d8ddf2dd5a" containerID="ae1849fd6482bb0d8fb0c3ff1aaeba1d059d16bcb3b6febe3aa178beb91f0bc7" exitCode=0 Feb 04 11:45:49 crc kubenswrapper[4728]: I0204 11:45:49.057066 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hvgvk" event={"ID":"dead90ba-0731-47f7-9252-45d8ddf2dd5a","Type":"ContainerDied","Data":"ae1849fd6482bb0d8fb0c3ff1aaeba1d059d16bcb3b6febe3aa178beb91f0bc7"} Feb 04 11:45:50 crc kubenswrapper[4728]: I0204 11:45:50.607393 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hvgvk" podUID="dead90ba-0731-47f7-9252-45d8ddf2dd5a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Feb 04 11:45:51 crc kubenswrapper[4728]: I0204 11:45:51.969130 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-11c2-account-create-update-kzkht" Feb 04 11:45:51 crc kubenswrapper[4728]: I0204 11:45:51.981070 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-kvjt5" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.010241 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-f565-account-create-update-hn9mk" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.020049 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdfe3ee4-3d03-4e23-b977-b90d512610ab-operator-scripts\") pod \"bdfe3ee4-3d03-4e23-b977-b90d512610ab\" (UID: \"bdfe3ee4-3d03-4e23-b977-b90d512610ab\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.020149 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz7cv\" (UniqueName: \"kubernetes.io/projected/3528d762-0e78-4914-ae55-f11bb812f322-kube-api-access-bz7cv\") pod \"3528d762-0e78-4914-ae55-f11bb812f322\" (UID: \"3528d762-0e78-4914-ae55-f11bb812f322\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.020228 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6psvm\" (UniqueName: \"kubernetes.io/projected/bdfe3ee4-3d03-4e23-b977-b90d512610ab-kube-api-access-6psvm\") pod \"bdfe3ee4-3d03-4e23-b977-b90d512610ab\" (UID: \"bdfe3ee4-3d03-4e23-b977-b90d512610ab\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.020318 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3528d762-0e78-4914-ae55-f11bb812f322-operator-scripts\") pod \"3528d762-0e78-4914-ae55-f11bb812f322\" (UID: \"3528d762-0e78-4914-ae55-f11bb812f322\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.020793 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdfe3ee4-3d03-4e23-b977-b90d512610ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdfe3ee4-3d03-4e23-b977-b90d512610ab" (UID: "bdfe3ee4-3d03-4e23-b977-b90d512610ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.021088 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3528d762-0e78-4914-ae55-f11bb812f322-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3528d762-0e78-4914-ae55-f11bb812f322" (UID: "3528d762-0e78-4914-ae55-f11bb812f322"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.038900 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfe3ee4-3d03-4e23-b977-b90d512610ab-kube-api-access-6psvm" (OuterVolumeSpecName: "kube-api-access-6psvm") pod "bdfe3ee4-3d03-4e23-b977-b90d512610ab" (UID: "bdfe3ee4-3d03-4e23-b977-b90d512610ab"). InnerVolumeSpecName "kube-api-access-6psvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.039443 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3528d762-0e78-4914-ae55-f11bb812f322-kube-api-access-bz7cv" (OuterVolumeSpecName: "kube-api-access-bz7cv") pod "3528d762-0e78-4914-ae55-f11bb812f322" (UID: "3528d762-0e78-4914-ae55-f11bb812f322"). InnerVolumeSpecName "kube-api-access-bz7cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.072368 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-gzzfl" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.080605 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0249-account-create-update-jjdf2" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.091198 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-64d0-account-create-update-xrn2s" event={"ID":"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a","Type":"ContainerDied","Data":"b5fd4fe59b4ed869ac9992c49f93da4233f8a6bed8b7e57c9b378e6e21bfbbbc"} Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.091564 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5fd4fe59b4ed869ac9992c49f93da4233f8a6bed8b7e57c9b378e6e21bfbbbc" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.091431 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mg5hg" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.093669 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0249-account-create-update-jjdf2" event={"ID":"81194f90-78b9-463c-a83e-adce1621a8ec","Type":"ContainerDied","Data":"e672cf66514a6bfefda82cf11a9659b51442123f9a27d706e02c3498ec14d2e6"} Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.093707 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e672cf66514a6bfefda82cf11a9659b51442123f9a27d706e02c3498ec14d2e6" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.093859 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0249-account-create-update-jjdf2" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.096542 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qdzqf" event={"ID":"d8279c44-c9f5-40f7-a933-bf7b91f30750","Type":"ContainerStarted","Data":"1b0ae7805d043aaaa9d704d909ca5ad0b5f83d471d67241cf83453be403bed11"} Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.111279 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mg5hg" event={"ID":"1c9cdbe1-394e-4100-ad1c-53851f9955fb","Type":"ContainerDied","Data":"893ac47b764e8034f5a32ddd31d04326ea79ae82f6bf5aa1d769ef7d572f5e7d"} Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.111526 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="893ac47b764e8034f5a32ddd31d04326ea79ae82f6bf5aa1d769ef7d572f5e7d" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.111607 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mg5hg" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.117680 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kvjt5" event={"ID":"3528d762-0e78-4914-ae55-f11bb812f322","Type":"ContainerDied","Data":"0cdfe1066c70c459d82a4eb1f33f71330204369a5684ad299098f7dea82fdd07"} Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.117976 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cdfe1066c70c459d82a4eb1f33f71330204369a5684ad299098f7dea82fdd07" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.118127 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-kvjt5" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.119190 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-64d0-account-create-update-xrn2s" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.120414 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f565-account-create-update-hn9mk" event={"ID":"a84078cf-9bd8-4920-9537-c4d1f863e4b2","Type":"ContainerDied","Data":"2f5091ab3093dda54d10d250ca8517829d9e1033b4dd53e4ab67e38277ad5732"} Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.120451 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f5091ab3093dda54d10d250ca8517829d9e1033b4dd53e4ab67e38277ad5732" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.120497 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f565-account-create-update-hn9mk" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.121193 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjwv2\" (UniqueName: \"kubernetes.io/projected/1c9cdbe1-394e-4100-ad1c-53851f9955fb-kube-api-access-qjwv2\") pod \"1c9cdbe1-394e-4100-ad1c-53851f9955fb\" (UID: \"1c9cdbe1-394e-4100-ad1c-53851f9955fb\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.121316 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84078cf-9bd8-4920-9537-c4d1f863e4b2-operator-scripts\") pod \"a84078cf-9bd8-4920-9537-c4d1f863e4b2\" (UID: \"a84078cf-9bd8-4920-9537-c4d1f863e4b2\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.121409 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94nwg\" (UniqueName: \"kubernetes.io/projected/81194f90-78b9-463c-a83e-adce1621a8ec-kube-api-access-94nwg\") pod \"81194f90-78b9-463c-a83e-adce1621a8ec\" (UID: \"81194f90-78b9-463c-a83e-adce1621a8ec\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.121453 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37fac279-e557-4505-8a93-d7610f2326f0-operator-scripts\") pod \"37fac279-e557-4505-8a93-d7610f2326f0\" (UID: \"37fac279-e557-4505-8a93-d7610f2326f0\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.121479 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9cdbe1-394e-4100-ad1c-53851f9955fb-operator-scripts\") pod \"1c9cdbe1-394e-4100-ad1c-53851f9955fb\" (UID: \"1c9cdbe1-394e-4100-ad1c-53851f9955fb\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.121515 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zbw5\" (UniqueName: \"kubernetes.io/projected/a84078cf-9bd8-4920-9537-c4d1f863e4b2-kube-api-access-5zbw5\") pod \"a84078cf-9bd8-4920-9537-c4d1f863e4b2\" (UID: \"a84078cf-9bd8-4920-9537-c4d1f863e4b2\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.121543 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhz6w\" (UniqueName: \"kubernetes.io/projected/37fac279-e557-4505-8a93-d7610f2326f0-kube-api-access-dhz6w\") pod \"37fac279-e557-4505-8a93-d7610f2326f0\" (UID: \"37fac279-e557-4505-8a93-d7610f2326f0\") " Feb 04 
11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.121559 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81194f90-78b9-463c-a83e-adce1621a8ec-operator-scripts\") pod \"81194f90-78b9-463c-a83e-adce1621a8ec\" (UID: \"81194f90-78b9-463c-a83e-adce1621a8ec\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.121972 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fac279-e557-4505-8a93-d7610f2326f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37fac279-e557-4505-8a93-d7610f2326f0" (UID: "37fac279-e557-4505-8a93-d7610f2326f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.122083 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz7cv\" (UniqueName: \"kubernetes.io/projected/3528d762-0e78-4914-ae55-f11bb812f322-kube-api-access-bz7cv\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.122131 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37fac279-e557-4505-8a93-d7610f2326f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.122146 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6psvm\" (UniqueName: \"kubernetes.io/projected/bdfe3ee4-3d03-4e23-b977-b90d512610ab-kube-api-access-6psvm\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.122160 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3528d762-0e78-4914-ae55-f11bb812f322-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.122169 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdfe3ee4-3d03-4e23-b977-b90d512610ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.123036 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81194f90-78b9-463c-a83e-adce1621a8ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81194f90-78b9-463c-a83e-adce1621a8ec" (UID: "81194f90-78b9-463c-a83e-adce1621a8ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.127571 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9cdbe1-394e-4100-ad1c-53851f9955fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c9cdbe1-394e-4100-ad1c-53851f9955fb" (UID: "1c9cdbe1-394e-4100-ad1c-53851f9955fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.127677 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84078cf-9bd8-4920-9537-c4d1f863e4b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a84078cf-9bd8-4920-9537-c4d1f863e4b2" (UID: "a84078cf-9bd8-4920-9537-c4d1f863e4b2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.127689 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9cdbe1-394e-4100-ad1c-53851f9955fb-kube-api-access-qjwv2" (OuterVolumeSpecName: "kube-api-access-qjwv2") pod "1c9cdbe1-394e-4100-ad1c-53851f9955fb" (UID: "1c9cdbe1-394e-4100-ad1c-53851f9955fb"). InnerVolumeSpecName "kube-api-access-qjwv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.127888 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a84078cf-9bd8-4920-9537-c4d1f863e4b2-kube-api-access-5zbw5" (OuterVolumeSpecName: "kube-api-access-5zbw5") pod "a84078cf-9bd8-4920-9537-c4d1f863e4b2" (UID: "a84078cf-9bd8-4920-9537-c4d1f863e4b2"). InnerVolumeSpecName "kube-api-access-5zbw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.131504 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m8z6k" event={"ID":"54311305-ef02-48b5-a913-4b5c8fa9730b","Type":"ContainerDied","Data":"11e8c8e0e312713130b45769aa16fa2ce88acffe4a380b1262dcc81be6ebfb91"} Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.131543 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11e8c8e0e312713130b45769aa16fa2ce88acffe4a380b1262dcc81be6ebfb91" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.133305 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81194f90-78b9-463c-a83e-adce1621a8ec-kube-api-access-94nwg" (OuterVolumeSpecName: "kube-api-access-94nwg") pod "81194f90-78b9-463c-a83e-adce1621a8ec" (UID: "81194f90-78b9-463c-a83e-adce1621a8ec"). InnerVolumeSpecName "kube-api-access-94nwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.133946 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fac279-e557-4505-8a93-d7610f2326f0-kube-api-access-dhz6w" (OuterVolumeSpecName: "kube-api-access-dhz6w") pod "37fac279-e557-4505-8a93-d7610f2326f0" (UID: "37fac279-e557-4505-8a93-d7610f2326f0"). InnerVolumeSpecName "kube-api-access-dhz6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.139661 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m8z6k" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.139868 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gzzfl" event={"ID":"37fac279-e557-4505-8a93-d7610f2326f0","Type":"ContainerDied","Data":"b720b59c6469d079a6c550f2be0fe63177af06f029143a69ad3ece1e7a8c1d2e"} Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.139898 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b720b59c6469d079a6c550f2be0fe63177af06f029143a69ad3ece1e7a8c1d2e" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.139940 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gzzfl" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.140554 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.141887 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lwtzn" event={"ID":"5552fcdf-e47f-47e8-acde-ed2e74f54188","Type":"ContainerDied","Data":"897b10dc5e8366560ad41bbde2b4cd12c13adb61176dc8fd9872d3ec72dce29d"} Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.141921 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="897b10dc5e8366560ad41bbde2b4cd12c13adb61176dc8fd9872d3ec72dce29d" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.143743 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qdzqf" podStartSLOduration=6.181086842 podStartE2EDuration="12.14371852s" podCreationTimestamp="2026-02-04 11:45:40 +0000 UTC" firstStartedPulling="2026-02-04 11:45:45.824080101 +0000 UTC m=+1094.966784486" lastFinishedPulling="2026-02-04 11:45:51.786711779 +0000 UTC m=+1100.929416164" observedRunningTime="2026-02-04 11:45:52.127952332 +0000 UTC m=+1101.270656717" watchObservedRunningTime="2026-02-04 11:45:52.14371852 +0000 UTC m=+1101.286422905" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.144987 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hvgvk" event={"ID":"dead90ba-0731-47f7-9252-45d8ddf2dd5a","Type":"ContainerDied","Data":"5c2b1eff102ad7c7bd8dd03d1871118fed3ebf5fe38844fc6f3f54239f63bc85"} Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.145023 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c2b1eff102ad7c7bd8dd03d1871118fed3ebf5fe38844fc6f3f54239f63bc85" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.157197 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-11c2-account-create-update-kzkht" event={"ID":"bdfe3ee4-3d03-4e23-b977-b90d512610ab","Type":"ContainerDied","Data":"b9440cd35df144b651d80f9604352f5d050bd2681d825093e536ea3119c29fa4"} Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.157232 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9440cd35df144b651d80f9604352f5d050bd2681d825093e536ea3119c29fa4" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.157289 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-11c2-account-create-update-kzkht" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.168712 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hvgvk" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.223539 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-operator-scripts\") pod \"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a\" (UID: \"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.223589 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-config-data\") pod \"5552fcdf-e47f-47e8-acde-ed2e74f54188\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.223624 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-sb\") pod \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.223673 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6f6\" (UniqueName: \"kubernetes.io/projected/54311305-ef02-48b5-a913-4b5c8fa9730b-kube-api-access-wf6f6\") pod \"54311305-ef02-48b5-a913-4b5c8fa9730b\" (UID: \"54311305-ef02-48b5-a913-4b5c8fa9730b\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.223762 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54311305-ef02-48b5-a913-4b5c8fa9730b-operator-scripts\") pod \"54311305-ef02-48b5-a913-4b5c8fa9730b\" (UID: \"54311305-ef02-48b5-a913-4b5c8fa9730b\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.223807 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24zps\" (UniqueName: \"kubernetes.io/projected/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-kube-api-access-24zps\") pod \"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a\" (UID: \"1ebaecfd-261e-41a3-be7b-9e21a9b7a10a\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.223839 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-db-sync-config-data\") pod \"5552fcdf-e47f-47e8-acde-ed2e74f54188\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.223865 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5d66\" (UniqueName: \"kubernetes.io/projected/5552fcdf-e47f-47e8-acde-ed2e74f54188-kube-api-access-g5d66\") pod \"5552fcdf-e47f-47e8-acde-ed2e74f54188\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.223896 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qljkl\" (UniqueName: \"kubernetes.io/projected/dead90ba-0731-47f7-9252-45d8ddf2dd5a-kube-api-access-qljkl\") pod \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.223921 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-combined-ca-bundle\") pod \"5552fcdf-e47f-47e8-acde-ed2e74f54188\" (UID: \"5552fcdf-e47f-47e8-acde-ed2e74f54188\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.223995 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-nb\") pod \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.224025 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-config\") pod \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.224068 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-dns-svc\") pod \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\" (UID: \"dead90ba-0731-47f7-9252-45d8ddf2dd5a\") " Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.224435 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjwv2\" (UniqueName: \"kubernetes.io/projected/1c9cdbe1-394e-4100-ad1c-53851f9955fb-kube-api-access-qjwv2\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.224555 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84078cf-9bd8-4920-9537-c4d1f863e4b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.224582 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94nwg\" (UniqueName: \"kubernetes.io/projected/81194f90-78b9-463c-a83e-adce1621a8ec-kube-api-access-94nwg\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.224595 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9cdbe1-394e-4100-ad1c-53851f9955fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.224607 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zbw5\" (UniqueName: \"kubernetes.io/projected/a84078cf-9bd8-4920-9537-c4d1f863e4b2-kube-api-access-5zbw5\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.224618 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhz6w\" (UniqueName: \"kubernetes.io/projected/37fac279-e557-4505-8a93-d7610f2326f0-kube-api-access-dhz6w\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.224630 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81194f90-78b9-463c-a83e-adce1621a8ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.227286 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ebaecfd-261e-41a3-be7b-9e21a9b7a10a" (UID: "1ebaecfd-261e-41a3-be7b-9e21a9b7a10a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.227742 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54311305-ef02-48b5-a913-4b5c8fa9730b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54311305-ef02-48b5-a913-4b5c8fa9730b" (UID: "54311305-ef02-48b5-a913-4b5c8fa9730b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.236926 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5552fcdf-e47f-47e8-acde-ed2e74f54188-kube-api-access-g5d66" (OuterVolumeSpecName: "kube-api-access-g5d66") pod "5552fcdf-e47f-47e8-acde-ed2e74f54188" (UID: "5552fcdf-e47f-47e8-acde-ed2e74f54188"). InnerVolumeSpecName "kube-api-access-g5d66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.238253 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54311305-ef02-48b5-a913-4b5c8fa9730b-kube-api-access-wf6f6" (OuterVolumeSpecName: "kube-api-access-wf6f6") pod "54311305-ef02-48b5-a913-4b5c8fa9730b" (UID: "54311305-ef02-48b5-a913-4b5c8fa9730b"). InnerVolumeSpecName "kube-api-access-wf6f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.240400 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dead90ba-0731-47f7-9252-45d8ddf2dd5a-kube-api-access-qljkl" (OuterVolumeSpecName: "kube-api-access-qljkl") pod "dead90ba-0731-47f7-9252-45d8ddf2dd5a" (UID: "dead90ba-0731-47f7-9252-45d8ddf2dd5a"). InnerVolumeSpecName "kube-api-access-qljkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.240469 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5552fcdf-e47f-47e8-acde-ed2e74f54188" (UID: "5552fcdf-e47f-47e8-acde-ed2e74f54188"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.241651 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-kube-api-access-24zps" (OuterVolumeSpecName: "kube-api-access-24zps") pod "1ebaecfd-261e-41a3-be7b-9e21a9b7a10a" (UID: "1ebaecfd-261e-41a3-be7b-9e21a9b7a10a"). InnerVolumeSpecName "kube-api-access-24zps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.265167 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5552fcdf-e47f-47e8-acde-ed2e74f54188" (UID: "5552fcdf-e47f-47e8-acde-ed2e74f54188"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.324113 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dead90ba-0731-47f7-9252-45d8ddf2dd5a" (UID: "dead90ba-0731-47f7-9252-45d8ddf2dd5a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.326589 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.326644 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf6f6\" (UniqueName: \"kubernetes.io/projected/54311305-ef02-48b5-a913-4b5c8fa9730b-kube-api-access-wf6f6\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.326656 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54311305-ef02-48b5-a913-4b5c8fa9730b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.326669 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24zps\" (UniqueName: \"kubernetes.io/projected/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a-kube-api-access-24zps\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.326845 4728 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.326857 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5d66\" (UniqueName: \"kubernetes.io/projected/5552fcdf-e47f-47e8-acde-ed2e74f54188-kube-api-access-g5d66\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.326866 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qljkl\" (UniqueName: \"kubernetes.io/projected/dead90ba-0731-47f7-9252-45d8ddf2dd5a-kube-api-access-qljkl\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.326877 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.326889 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.328118 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dead90ba-0731-47f7-9252-45d8ddf2dd5a" (UID: "dead90ba-0731-47f7-9252-45d8ddf2dd5a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.333598 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-config-data" (OuterVolumeSpecName: "config-data") pod "5552fcdf-e47f-47e8-acde-ed2e74f54188" (UID: "5552fcdf-e47f-47e8-acde-ed2e74f54188"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.333621 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-config" (OuterVolumeSpecName: "config") pod "dead90ba-0731-47f7-9252-45d8ddf2dd5a" (UID: "dead90ba-0731-47f7-9252-45d8ddf2dd5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.337876 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dead90ba-0731-47f7-9252-45d8ddf2dd5a" (UID: "dead90ba-0731-47f7-9252-45d8ddf2dd5a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.429057 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.429086 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.429095 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5552fcdf-e47f-47e8-acde-ed2e74f54188-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:52 crc kubenswrapper[4728]: I0204 11:45:52.429104 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dead90ba-0731-47f7-9252-45d8ddf2dd5a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.164267 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m8z6k" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.164356 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-64d0-account-create-update-xrn2s" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.164398 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hvgvk" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.164433 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-lwtzn" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.223157 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hvgvk"] Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.232395 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hvgvk"] Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.571411 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dead90ba-0731-47f7-9252-45d8ddf2dd5a" path="/var/lib/kubelet/pods/dead90ba-0731-47f7-9252-45d8ddf2dd5a/volumes" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604037 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-4k4fv"] Feb 04 11:45:53 crc kubenswrapper[4728]: E0204 11:45:53.604376 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fac279-e557-4505-8a93-d7610f2326f0" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604395 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fac279-e557-4505-8a93-d7610f2326f0" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: E0204 11:45:53.604416 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5552fcdf-e47f-47e8-acde-ed2e74f54188" containerName="glance-db-sync" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604422 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5552fcdf-e47f-47e8-acde-ed2e74f54188" containerName="glance-db-sync" Feb 04 11:45:53 crc kubenswrapper[4728]: E0204 11:45:53.604430 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dead90ba-0731-47f7-9252-45d8ddf2dd5a" containerName="init" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604436 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="dead90ba-0731-47f7-9252-45d8ddf2dd5a" containerName="init" Feb 04 11:45:53 crc kubenswrapper[4728]: E0204 11:45:53.604447 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfe3ee4-3d03-4e23-b977-b90d512610ab" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604453 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfe3ee4-3d03-4e23-b977-b90d512610ab" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: E0204 11:45:53.604468 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81194f90-78b9-463c-a83e-adce1621a8ec" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604474 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="81194f90-78b9-463c-a83e-adce1621a8ec" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: E0204 11:45:53.604486 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9cdbe1-394e-4100-ad1c-53851f9955fb" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604493 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9cdbe1-394e-4100-ad1c-53851f9955fb" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: E0204 11:45:53.604502 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84078cf-9bd8-4920-9537-c4d1f863e4b2" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604508 4728 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a84078cf-9bd8-4920-9537-c4d1f863e4b2" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: E0204 11:45:53.604517 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebaecfd-261e-41a3-be7b-9e21a9b7a10a" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604523 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebaecfd-261e-41a3-be7b-9e21a9b7a10a" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: E0204 11:45:53.604632 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54311305-ef02-48b5-a913-4b5c8fa9730b" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604641 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="54311305-ef02-48b5-a913-4b5c8fa9730b" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: E0204 11:45:53.604655 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3528d762-0e78-4914-ae55-f11bb812f322" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604662 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3528d762-0e78-4914-ae55-f11bb812f322" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: E0204 11:45:53.604672 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dead90ba-0731-47f7-9252-45d8ddf2dd5a" containerName="dnsmasq-dns" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.604680 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="dead90ba-0731-47f7-9252-45d8ddf2dd5a" containerName="dnsmasq-dns" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.605455 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="81194f90-78b9-463c-a83e-adce1621a8ec" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.605477 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9cdbe1-394e-4100-ad1c-53851f9955fb" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.605490 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3528d762-0e78-4914-ae55-f11bb812f322" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.605500 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdfe3ee4-3d03-4e23-b977-b90d512610ab" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.605510 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="5552fcdf-e47f-47e8-acde-ed2e74f54188" containerName="glance-db-sync" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.605519 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fac279-e557-4505-8a93-d7610f2326f0" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.605527 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="dead90ba-0731-47f7-9252-45d8ddf2dd5a" containerName="dnsmasq-dns" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.605535 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ebaecfd-261e-41a3-be7b-9e21a9b7a10a" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.605544 4728 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a84078cf-9bd8-4920-9537-c4d1f863e4b2" containerName="mariadb-account-create-update" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.605552 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="54311305-ef02-48b5-a913-4b5c8fa9730b" containerName="mariadb-database-create" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.606361 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.627167 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-4k4fv"] Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.751174 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.751502 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.751526 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-config\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.751563 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.751671 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.751728 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59bm\" (UniqueName: \"kubernetes.io/projected/56996edc-8d58-43e9-8c65-9b3b6b08fe10-kube-api-access-k59bm\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.852700 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 
11:45:53.852792 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.852835 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59bm\" (UniqueName: \"kubernetes.io/projected/56996edc-8d58-43e9-8c65-9b3b6b08fe10-kube-api-access-k59bm\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.852895 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.852920 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.852939 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-config\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.853698 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-config\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.853731 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.854330 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.854480 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.854625 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.877095 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59bm\" (UniqueName: \"kubernetes.io/projected/56996edc-8d58-43e9-8c65-9b3b6b08fe10-kube-api-access-k59bm\") pod \"dnsmasq-dns-74f6bcbc87-4k4fv\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:53 crc kubenswrapper[4728]: I0204 11:45:53.934510 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:54 crc kubenswrapper[4728]: I0204 11:45:54.389846 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-4k4fv"] Feb 04 11:45:55 crc kubenswrapper[4728]: I0204 11:45:55.183945 4728 generic.go:334] "Generic (PLEG): container finished" podID="56996edc-8d58-43e9-8c65-9b3b6b08fe10" containerID="8fbf25577ac0a922eca161af9e01e59c9d946d5aa1da5780365d14d8be158f46" exitCode=0 Feb 04 11:45:55 crc kubenswrapper[4728]: I0204 11:45:55.184036 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" event={"ID":"56996edc-8d58-43e9-8c65-9b3b6b08fe10","Type":"ContainerDied","Data":"8fbf25577ac0a922eca161af9e01e59c9d946d5aa1da5780365d14d8be158f46"} Feb 04 11:45:55 crc kubenswrapper[4728]: I0204 11:45:55.184310 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" event={"ID":"56996edc-8d58-43e9-8c65-9b3b6b08fe10","Type":"ContainerStarted","Data":"7bcba2a5ee5bc4184074484e5ee236d17db73effd130322fc2a0d6da3b1909dc"} Feb 04 11:45:56 crc kubenswrapper[4728]: I0204 11:45:56.193456 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" event={"ID":"56996edc-8d58-43e9-8c65-9b3b6b08fe10","Type":"ContainerStarted","Data":"9e1532310c0f9fb9aedd929ebd28854cff0b381177df8c64401fa12424b76796"} Feb 04 11:45:56 crc kubenswrapper[4728]: I0204 11:45:56.194568 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:45:56 crc kubenswrapper[4728]: I0204 11:45:56.195100 4728 generic.go:334] "Generic (PLEG): container finished" podID="d8279c44-c9f5-40f7-a933-bf7b91f30750" containerID="1b0ae7805d043aaaa9d704d909ca5ad0b5f83d471d67241cf83453be403bed11" exitCode=0 Feb 04 11:45:56 crc kubenswrapper[4728]: I0204 11:45:56.195149 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qdzqf" event={"ID":"d8279c44-c9f5-40f7-a933-bf7b91f30750","Type":"ContainerDied","Data":"1b0ae7805d043aaaa9d704d909ca5ad0b5f83d471d67241cf83453be403bed11"} Feb 04 11:45:56 crc kubenswrapper[4728]: I0204 11:45:56.217700 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" podStartSLOduration=3.217673634 podStartE2EDuration="3.217673634s" podCreationTimestamp="2026-02-04 11:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:45:56.210126774 +0000 UTC m=+1105.352831159" watchObservedRunningTime="2026-02-04 11:45:56.217673634 +0000 UTC m=+1105.360378029" Feb 04 11:45:57 crc kubenswrapper[4728]: I0204 11:45:57.552393 4728 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:57 crc kubenswrapper[4728]: I0204 11:45:57.715214 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-config-data\") pod \"d8279c44-c9f5-40f7-a933-bf7b91f30750\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " Feb 04 11:45:57 crc kubenswrapper[4728]: I0204 11:45:57.715326 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlvqn\" (UniqueName: \"kubernetes.io/projected/d8279c44-c9f5-40f7-a933-bf7b91f30750-kube-api-access-hlvqn\") pod \"d8279c44-c9f5-40f7-a933-bf7b91f30750\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " Feb 04 11:45:57 crc kubenswrapper[4728]: I0204 11:45:57.715445 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-combined-ca-bundle\") pod \"d8279c44-c9f5-40f7-a933-bf7b91f30750\" (UID: \"d8279c44-c9f5-40f7-a933-bf7b91f30750\") " Feb 04 11:45:57 crc kubenswrapper[4728]: I0204 11:45:57.741008 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8279c44-c9f5-40f7-a933-bf7b91f30750-kube-api-access-hlvqn" (OuterVolumeSpecName: "kube-api-access-hlvqn") pod "d8279c44-c9f5-40f7-a933-bf7b91f30750" (UID: "d8279c44-c9f5-40f7-a933-bf7b91f30750"). InnerVolumeSpecName "kube-api-access-hlvqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:45:57 crc kubenswrapper[4728]: I0204 11:45:57.745102 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8279c44-c9f5-40f7-a933-bf7b91f30750" (UID: "d8279c44-c9f5-40f7-a933-bf7b91f30750"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:45:57 crc kubenswrapper[4728]: I0204 11:45:57.773163 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-config-data" (OuterVolumeSpecName: "config-data") pod "d8279c44-c9f5-40f7-a933-bf7b91f30750" (UID: "d8279c44-c9f5-40f7-a933-bf7b91f30750"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:45:57 crc kubenswrapper[4728]: I0204 11:45:57.817115 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:57 crc kubenswrapper[4728]: I0204 11:45:57.817156 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8279c44-c9f5-40f7-a933-bf7b91f30750-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:57 crc kubenswrapper[4728]: I0204 11:45:57.817165 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlvqn\" (UniqueName: \"kubernetes.io/projected/d8279c44-c9f5-40f7-a933-bf7b91f30750-kube-api-access-hlvqn\") on node \"crc\" DevicePath \"\"" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.212793 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qdzqf" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.212774 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qdzqf" event={"ID":"d8279c44-c9f5-40f7-a933-bf7b91f30750","Type":"ContainerDied","Data":"4a694e8c54cc3edde0a0b3a188e54b463735b32c0d7cea0b73446ea3dc9b4fc1"} Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.213564 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a694e8c54cc3edde0a0b3a188e54b463735b32c0d7cea0b73446ea3dc9b4fc1" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.542743 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pxrgg"] Feb 04 11:45:58 crc kubenswrapper[4728]: E0204 11:45:58.543188 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8279c44-c9f5-40f7-a933-bf7b91f30750" containerName="keystone-db-sync" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.543213 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8279c44-c9f5-40f7-a933-bf7b91f30750" containerName="keystone-db-sync" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.543466 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8279c44-c9f5-40f7-a933-bf7b91f30750" containerName="keystone-db-sync" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.545157 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.546872 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.547137 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.551112 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.551175 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.564849 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pxrgg"] Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.564962 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-prll6" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.598573 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-4k4fv"] Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.656554 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-d2c7z"] Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.659189 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.680732 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-jx9lp"] Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.681682 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-jx9lp" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.692013 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.692265 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-m2l7x" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.731602 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-combined-ca-bundle\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.731673 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-credential-keys\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.731724 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-fernet-keys\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.731740 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-scripts\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.731770 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbhcg\" (UniqueName: \"kubernetes.io/projected/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-kube-api-access-kbhcg\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.738135 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-config-data\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.739254 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-d2c7z"] Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.751888 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jx9lp"] Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.831549 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kj7rj"] Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.833186 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839433 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-config-data\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839471 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839528 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-combined-ca-bundle\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839556 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839575 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-credential-keys\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839618 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-fernet-keys\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839638 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbhcg\" (UniqueName: \"kubernetes.io/projected/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-kube-api-access-kbhcg\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839656 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-scripts\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839673 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " 
pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839696 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-config-data\") pod \"heat-db-sync-jx9lp\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " pod="openstack/heat-db-sync-jx9lp" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839720 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-config\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839781 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-svc\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839797 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvfw\" (UniqueName: \"kubernetes.io/projected/2cf0743b-3191-4fcc-b059-11df3896a2af-kube-api-access-qqvfw\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839814 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-combined-ca-bundle\") pod \"heat-db-sync-jx9lp\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " pod="openstack/heat-db-sync-jx9lp" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.839827 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95mpp\" (UniqueName: \"kubernetes.io/projected/2186aabd-28ff-488a-a224-01c14710adac-kube-api-access-95mpp\") pod \"heat-db-sync-jx9lp\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " pod="openstack/heat-db-sync-jx9lp" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.846596 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-l5ggl" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.847010 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.847174 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.863159 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-credential-keys\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.863367 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-config-data\") pod 
\"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.866438 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-fernet-keys\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.866791 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-scripts\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.867819 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-combined-ca-bundle\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.886204 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kj7rj"] Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.887307 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbhcg\" (UniqueName: \"kubernetes.io/projected/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-kube-api-access-kbhcg\") pod \"keystone-bootstrap-pxrgg\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.888251 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.932523 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-kgzbm"] Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.933991 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.951527 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.951790 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.951941 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bhbt8" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.958081 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kgzbm"] Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959046 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959125 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-config\") pod \"neutron-db-sync-kj7rj\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959154 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959190 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-config-data\") pod \"heat-db-sync-jx9lp\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " pod="openstack/heat-db-sync-jx9lp" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959220 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-config\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959255 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-combined-ca-bundle\") pod \"neutron-db-sync-kj7rj\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959294 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-svc\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959316 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvfw\" 
(UniqueName: \"kubernetes.io/projected/2cf0743b-3191-4fcc-b059-11df3896a2af-kube-api-access-qqvfw\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959342 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6fdb\" (UniqueName: \"kubernetes.io/projected/e3187936-5c4f-4c33-ae73-63309fa067aa-kube-api-access-h6fdb\") pod \"neutron-db-sync-kj7rj\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959364 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-combined-ca-bundle\") pod \"heat-db-sync-jx9lp\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " pod="openstack/heat-db-sync-jx9lp" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959385 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95mpp\" (UniqueName: \"kubernetes.io/projected/2186aabd-28ff-488a-a224-01c14710adac-kube-api-access-95mpp\") pod \"heat-db-sync-jx9lp\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " pod="openstack/heat-db-sync-jx9lp" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.959424 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.960023 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.960292 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.960562 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.961046 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-svc\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.976600 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-config\") pod 
\"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.979074 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-combined-ca-bundle\") pod \"heat-db-sync-jx9lp\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " pod="openstack/heat-db-sync-jx9lp" Feb 04 11:45:58 crc kubenswrapper[4728]: I0204 11:45:58.985736 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-config-data\") pod \"heat-db-sync-jx9lp\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " pod="openstack/heat-db-sync-jx9lp" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.006355 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvfw\" (UniqueName: \"kubernetes.io/projected/2cf0743b-3191-4fcc-b059-11df3896a2af-kube-api-access-qqvfw\") pod \"dnsmasq-dns-847c4cc679-d2c7z\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.006738 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.012216 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95mpp\" (UniqueName: \"kubernetes.io/projected/2186aabd-28ff-488a-a224-01c14710adac-kube-api-access-95mpp\") pod \"heat-db-sync-jx9lp\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " pod="openstack/heat-db-sync-jx9lp" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.029111 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.031522 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.032589 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-jx9lp" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.034516 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.034739 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.054260 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-d2c7z"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.061855 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6fdb\" (UniqueName: \"kubernetes.io/projected/e3187936-5c4f-4c33-ae73-63309fa067aa-kube-api-access-h6fdb\") pod \"neutron-db-sync-kj7rj\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.061909 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-db-sync-config-data\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.061974 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d949b343-bfde-4d50-81b1-a7c66765c076-etc-machine-id\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.062012 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-scripts\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.062220 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8whl\" (UniqueName: \"kubernetes.io/projected/d949b343-bfde-4d50-81b1-a7c66765c076-kube-api-access-b8whl\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.062241 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-config\") pod \"neutron-db-sync-kj7rj\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.062259 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-combined-ca-bundle\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.062296 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-combined-ca-bundle\") pod 
\"neutron-db-sync-kj7rj\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.062316 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-config-data\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.069484 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-combined-ca-bundle\") pod \"neutron-db-sync-kj7rj\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.079701 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-config\") pod \"neutron-db-sync-kj7rj\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.104992 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6fdb\" (UniqueName: \"kubernetes.io/projected/e3187936-5c4f-4c33-ae73-63309fa067aa-kube-api-access-h6fdb\") pod \"neutron-db-sync-kj7rj\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.135671 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.156640 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-xrxlt"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.159831 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.161830 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.163960 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.164131 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-g78z8" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166345 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-config-data\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166390 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqbk4\" (UniqueName: \"kubernetes.io/projected/20df29d2-926c-4921-a7f2-eac948556d19-kube-api-access-zqbk4\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166430 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-scripts\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166469 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d949b343-bfde-4d50-81b1-a7c66765c076-etc-machine-id\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166562 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-log-httpd\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166586 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-scripts\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166614 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8whl\" (UniqueName: \"kubernetes.io/projected/d949b343-bfde-4d50-81b1-a7c66765c076-kube-api-access-b8whl\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166657 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-combined-ca-bundle\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 
04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166690 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-run-httpd\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166780 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166816 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-config-data\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166889 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-db-sync-config-data\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.166913 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.167046 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d949b343-bfde-4d50-81b1-a7c66765c076-etc-machine-id\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.173075 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xrxlt"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.176408 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-scripts\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.177134 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-config-data\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.182035 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-db-sync-config-data\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.184845 4728 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z7mz4"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.199572 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-combined-ca-bundle\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.200506 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.215808 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8whl\" (UniqueName: \"kubernetes.io/projected/d949b343-bfde-4d50-81b1-a7c66765c076-kube-api-access-b8whl\") pod \"cinder-db-sync-kgzbm\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.223151 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z7mz4"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.234083 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" podUID="56996edc-8d58-43e9-8c65-9b3b6b08fe10" containerName="dnsmasq-dns" containerID="cri-o://9e1532310c0f9fb9aedd929ebd28854cff0b381177df8c64401fa12424b76796" gracePeriod=10 Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.250638 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-nqv8q"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.258118 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.276386 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.276659 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xp6sr" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.281726 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-combined-ca-bundle\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.281838 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-config-data\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.281886 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-config-data\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.281911 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqbk4\" (UniqueName: \"kubernetes.io/projected/20df29d2-926c-4921-a7f2-eac948556d19-kube-api-access-zqbk4\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.281944 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-config\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.281967 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-scripts\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.281993 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvfn\" (UniqueName: \"kubernetes.io/projected/6411322f-ced0-457e-9f0f-61f37755a0b5-kube-api-access-vlvfn\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.282011 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e456f3f-3163-48da-9e88-aaddade811b6-logs\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.282043 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.282075 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.282102 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-log-httpd\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.282140 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-run-httpd\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.282188 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.282964 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.282990 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.283013 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.283036 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-scripts\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.283056 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dmpm\" (UniqueName: 
\"kubernetes.io/projected/0e456f3f-3163-48da-9e88-aaddade811b6-kube-api-access-7dmpm\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.291924 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-run-httpd\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.295528 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-config-data\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.295781 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-scripts\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.295885 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-log-httpd\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.299305 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.305253 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqbk4\" (UniqueName: \"kubernetes.io/projected/20df29d2-926c-4921-a7f2-eac948556d19-kube-api-access-zqbk4\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.306434 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.314924 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nqv8q"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.351724 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.386790 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.386866 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.386940 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-combined-ca-bundle\") pod \"barbican-db-sync-nqv8q\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387005 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j62df\" (UniqueName: \"kubernetes.io/projected/aad93666-c664-4ded-8970-993c847ac437-kube-api-access-j62df\") pod \"barbican-db-sync-nqv8q\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387078 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387104 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387135 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-scripts\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387157 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dmpm\" (UniqueName: \"kubernetes.io/projected/0e456f3f-3163-48da-9e88-aaddade811b6-kube-api-access-7dmpm\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387203 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-combined-ca-bundle\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " 
pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387232 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-config-data\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387284 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-config\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387319 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-db-sync-config-data\") pod \"barbican-db-sync-nqv8q\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387348 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlvfn\" (UniqueName: \"kubernetes.io/projected/6411322f-ced0-457e-9f0f-61f37755a0b5-kube-api-access-vlvfn\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387373 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e456f3f-3163-48da-9e88-aaddade811b6-logs\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387790 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e456f3f-3163-48da-9e88-aaddade811b6-logs\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387886 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.387916 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.388503 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.389229 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.389316 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-config\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.391337 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-scripts\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.392329 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-combined-ca-bundle\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.394986 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.396165 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-config-data\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.404916 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlvfn\" (UniqueName: \"kubernetes.io/projected/6411322f-ced0-457e-9f0f-61f37755a0b5-kube-api-access-vlvfn\") pod \"dnsmasq-dns-785d8bcb8c-z7mz4\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.405276 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dmpm\" (UniqueName: \"kubernetes.io/projected/0e456f3f-3163-48da-9e88-aaddade811b6-kube-api-access-7dmpm\") pod \"placement-db-sync-xrxlt\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.447310 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.489064 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j62df\" (UniqueName: \"kubernetes.io/projected/aad93666-c664-4ded-8970-993c847ac437-kube-api-access-j62df\") pod \"barbican-db-sync-nqv8q\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.489223 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-db-sync-config-data\") pod \"barbican-db-sync-nqv8q\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.489348 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-combined-ca-bundle\") pod \"barbican-db-sync-nqv8q\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.495005 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-combined-ca-bundle\") pod \"barbican-db-sync-nqv8q\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.495606 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-db-sync-config-data\") pod \"barbican-db-sync-nqv8q\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.507266 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j62df\" (UniqueName: \"kubernetes.io/projected/aad93666-c664-4ded-8970-993c847ac437-kube-api-access-j62df\") pod \"barbican-db-sync-nqv8q\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.512318 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xrxlt" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.557261 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.672642 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.673041 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.679269 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.684810 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rnmhk" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.685023 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.685186 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.685344 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.713403 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.727324 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-d2c7z"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.741624 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jx9lp"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.793668 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.793983 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.794013 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.794029 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.794048 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-logs\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.794072 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.794091 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.794115 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89bqp\" (UniqueName: \"kubernetes.io/projected/524c4174-6f1e-43c5-9c61-4a22fee4883c-kube-api-access-89bqp\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.871347 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.873374 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.889028 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.889250 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.889335 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.895929 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.895992 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.896021 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.896036 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.896052 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-logs\") 
pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.896073 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.896094 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.896116 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89bqp\" (UniqueName: \"kubernetes.io/projected/524c4174-6f1e-43c5-9c61-4a22fee4883c-kube-api-access-89bqp\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.901903 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-logs\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.902309 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.902865 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.907594 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.910057 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.910365 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " 
pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.910824 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.924597 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89bqp\" (UniqueName: \"kubernetes.io/projected/524c4174-6f1e-43c5-9c61-4a22fee4883c-kube-api-access-89bqp\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.949626 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pxrgg"] Feb 04 11:45:59 crc kubenswrapper[4728]: I0204 11:45:59.958306 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.006012 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.006090 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-scripts\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.006315 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-config-data\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.006392 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.006418 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfzkj\" (UniqueName: \"kubernetes.io/projected/0d92ab9a-551a-40c0-8de4-1347156c89ec-kube-api-access-rfzkj\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.006442 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-logs\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.006462 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.006485 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.015972 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.107994 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-logs\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.108777 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.108918 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.109026 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.108796 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-logs\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.110979 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.111243 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-scripts\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.111382 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-config-data\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.111686 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.112201 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfzkj\" (UniqueName: \"kubernetes.io/projected/0d92ab9a-551a-40c0-8de4-1347156c89ec-kube-api-access-rfzkj\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.114439 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.115315 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.120375 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-scripts\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.123732 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-config-data\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.125526 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.129851 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kj7rj"] Feb 04 11:46:00 
crc kubenswrapper[4728]: I0204 11:46:00.155014 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfzkj\" (UniqueName: \"kubernetes.io/projected/0d92ab9a-551a-40c0-8de4-1347156c89ec-kube-api-access-rfzkj\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.156303 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kgzbm"] Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.169383 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.195966 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.250581 4728 generic.go:334] "Generic (PLEG): container finished" podID="56996edc-8d58-43e9-8c65-9b3b6b08fe10" containerID="9e1532310c0f9fb9aedd929ebd28854cff0b381177df8c64401fa12424b76796" exitCode=0 Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.250679 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" event={"ID":"56996edc-8d58-43e9-8c65-9b3b6b08fe10","Type":"ContainerDied","Data":"9e1532310c0f9fb9aedd929ebd28854cff0b381177df8c64401fa12424b76796"} Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.250814 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" event={"ID":"56996edc-8d58-43e9-8c65-9b3b6b08fe10","Type":"ContainerDied","Data":"7bcba2a5ee5bc4184074484e5ee236d17db73effd130322fc2a0d6da3b1909dc"} Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.250847 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcba2a5ee5bc4184074484e5ee236d17db73effd130322fc2a0d6da3b1909dc" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.261339 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jx9lp" event={"ID":"2186aabd-28ff-488a-a224-01c14710adac","Type":"ContainerStarted","Data":"42eb3509d3e4fdf4876c92c6908112f1b90a2bd4db5b299d3577678f46b3d149"} Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.262600 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" event={"ID":"2cf0743b-3191-4fcc-b059-11df3896a2af","Type":"ContainerStarted","Data":"6ae7e31d5d8bac62cbb6f0a174b74394995b6ee11846e250dc93241ae201d167"} Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.262813 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.263625 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kj7rj" event={"ID":"e3187936-5c4f-4c33-ae73-63309fa067aa","Type":"ContainerStarted","Data":"f42a5c9f5affa75020b783e141cb644659d152c528cf6c5c3ff39de0360bfe78"} Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.264511 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxrgg" event={"ID":"a25eb9f6-2a2d-4eb7-b643-c6a354049b47","Type":"ContainerStarted","Data":"670ea8c2557ffe3a52b9f3945c7fe201b249a85d0a8920721e477d91fb644140"} Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.265649 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kgzbm" event={"ID":"d949b343-bfde-4d50-81b1-a7c66765c076","Type":"ContainerStarted","Data":"5e7f5f04af80700c2ae7fb2bae85e6f4a7174d6e5a18ca58acd6760d84744b2c"} Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.416661 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k59bm\" (UniqueName: \"kubernetes.io/projected/56996edc-8d58-43e9-8c65-9b3b6b08fe10-kube-api-access-k59bm\") pod \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.416713 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-nb\") pod \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.416825 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-sb\") pod \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.416859 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-svc\") pod \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.416947 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.418109 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-swift-storage-0\") pod \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.420394 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-config\") pod \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\" (UID: \"56996edc-8d58-43e9-8c65-9b3b6b08fe10\") " Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.478875 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56996edc-8d58-43e9-8c65-9b3b6b08fe10-kube-api-access-k59bm" (OuterVolumeSpecName: "kube-api-access-k59bm") pod 
"56996edc-8d58-43e9-8c65-9b3b6b08fe10" (UID: "56996edc-8d58-43e9-8c65-9b3b6b08fe10"). InnerVolumeSpecName "kube-api-access-k59bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.532277 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k59bm\" (UniqueName: \"kubernetes.io/projected/56996edc-8d58-43e9-8c65-9b3b6b08fe10-kube-api-access-k59bm\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.744941 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.834826 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xrxlt"] Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.856669 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z7mz4"] Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.856719 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nqv8q"] Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.863460 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:46:00 crc kubenswrapper[4728]: I0204 11:46:00.869923 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:46:00 crc kubenswrapper[4728]: W0204 11:46:00.909019 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20df29d2_926c_4921_a7f2_eac948556d19.slice/crio-9ba66c242b3ce64cfe2b488f2958742e515eece89b81db989e5045e66450729a WatchSource:0}: Error finding container 9ba66c242b3ce64cfe2b488f2958742e515eece89b81db989e5045e66450729a: Status 404 returned error can't find the container with id 9ba66c242b3ce64cfe2b488f2958742e515eece89b81db989e5045e66450729a Feb 04 11:46:01 crc kubenswrapper[4728]: I0204 11:46:01.040477 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56996edc-8d58-43e9-8c65-9b3b6b08fe10" (UID: "56996edc-8d58-43e9-8c65-9b3b6b08fe10"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:01 crc kubenswrapper[4728]: I0204 11:46:01.069900 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:01 crc kubenswrapper[4728]: I0204 11:46:01.095827 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:46:01 crc kubenswrapper[4728]: I0204 11:46:01.109426 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-config" (OuterVolumeSpecName: "config") pod "56996edc-8d58-43e9-8c65-9b3b6b08fe10" (UID: "56996edc-8d58-43e9-8c65-9b3b6b08fe10"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:01 crc kubenswrapper[4728]: I0204 11:46:01.125834 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56996edc-8d58-43e9-8c65-9b3b6b08fe10" (UID: "56996edc-8d58-43e9-8c65-9b3b6b08fe10"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.175170 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56996edc-8d58-43e9-8c65-9b3b6b08fe10" (UID: "56996edc-8d58-43e9-8c65-9b3b6b08fe10"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.176821 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.176848 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.176858 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.179024 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56996edc-8d58-43e9-8c65-9b3b6b08fe10" (UID: "56996edc-8d58-43e9-8c65-9b3b6b08fe10"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.283719 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56996edc-8d58-43e9-8c65-9b3b6b08fe10-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.316044 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.331673 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20df29d2-926c-4921-a7f2-eac948556d19","Type":"ContainerStarted","Data":"9ba66c242b3ce64cfe2b488f2958742e515eece89b81db989e5045e66450729a"} Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.333284 4728 generic.go:334] "Generic (PLEG): container finished" podID="2cf0743b-3191-4fcc-b059-11df3896a2af" containerID="0afd798335ed236067f6deff44c2a0b36818609e13b5c23dbcc696cabf19865e" exitCode=0 Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.333352 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" event={"ID":"2cf0743b-3191-4fcc-b059-11df3896a2af","Type":"ContainerDied","Data":"0afd798335ed236067f6deff44c2a0b36818609e13b5c23dbcc696cabf19865e"} Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.337234 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kj7rj" event={"ID":"e3187936-5c4f-4c33-ae73-63309fa067aa","Type":"ContainerStarted","Data":"57a2b04e6bb372fa84d619cb384d7f6c9e8ed4259615c3ecc6a06af4bdcda13c"} Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.338943 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"524c4174-6f1e-43c5-9c61-4a22fee4883c","Type":"ContainerStarted","Data":"5da4ed5293f9d6eb8218a4f97c5f744766b757f6c148f2ec540b44e96170a839"} Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.364548 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxrgg" event={"ID":"a25eb9f6-2a2d-4eb7-b643-c6a354049b47","Type":"ContainerStarted","Data":"7977719a93e758c78f02f7d98ada25b4bccc6d39041bcd036a40072a0b3b5b90"} Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.375051 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" event={"ID":"6411322f-ced0-457e-9f0f-61f37755a0b5","Type":"ContainerStarted","Data":"54efd24bfe81b9751c4c3dc88c29f9e2d60bd6a08ea9311af09ef06b7d306486"} Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.377505 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xrxlt" event={"ID":"0e456f3f-3163-48da-9e88-aaddade811b6","Type":"ContainerStarted","Data":"2ae858dce8f1056379ec0fa5a13dd860dfb001bcf49a88bebdd97e2b8bf745c5"} Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.378828 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-4k4fv" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.382063 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nqv8q" event={"ID":"aad93666-c664-4ded-8970-993c847ac437","Type":"ContainerStarted","Data":"e0f597f4edf900812cd38c682abea6f28b79238a417dcd67dde7b06129d9662c"} Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.405957 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kj7rj" podStartSLOduration=3.405936738 podStartE2EDuration="3.405936738s" podCreationTimestamp="2026-02-04 11:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:01.367927629 +0000 UTC m=+1110.510632014" watchObservedRunningTime="2026-02-04 11:46:01.405936738 +0000 UTC m=+1110.548641123" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.411249 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pxrgg" podStartSLOduration=3.411234245 podStartE2EDuration="3.411234245s" podCreationTimestamp="2026-02-04 11:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:01.384869664 +0000 UTC m=+1110.527574049" watchObservedRunningTime="2026-02-04 11:46:01.411234245 +0000 UTC m=+1110.553938640" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.442369 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-4k4fv"] Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.456258 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-4k4fv"] Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.574195 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56996edc-8d58-43e9-8c65-9b3b6b08fe10" path="/var/lib/kubelet/pods/56996edc-8d58-43e9-8c65-9b3b6b08fe10/volumes" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.833947 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.898897 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-config\") pod \"2cf0743b-3191-4fcc-b059-11df3896a2af\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.899051 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-svc\") pod \"2cf0743b-3191-4fcc-b059-11df3896a2af\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.899077 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-nb\") pod \"2cf0743b-3191-4fcc-b059-11df3896a2af\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.899117 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqvfw\" (UniqueName: \"kubernetes.io/projected/2cf0743b-3191-4fcc-b059-11df3896a2af-kube-api-access-qqvfw\") pod \"2cf0743b-3191-4fcc-b059-11df3896a2af\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.899142 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-swift-storage-0\") pod \"2cf0743b-3191-4fcc-b059-11df3896a2af\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.899177 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-sb\") pod \"2cf0743b-3191-4fcc-b059-11df3896a2af\" (UID: \"2cf0743b-3191-4fcc-b059-11df3896a2af\") " Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.909174 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf0743b-3191-4fcc-b059-11df3896a2af-kube-api-access-qqvfw" (OuterVolumeSpecName: "kube-api-access-qqvfw") pod "2cf0743b-3191-4fcc-b059-11df3896a2af" (UID: "2cf0743b-3191-4fcc-b059-11df3896a2af"). InnerVolumeSpecName "kube-api-access-qqvfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.935814 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2cf0743b-3191-4fcc-b059-11df3896a2af" (UID: "2cf0743b-3191-4fcc-b059-11df3896a2af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.940619 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2cf0743b-3191-4fcc-b059-11df3896a2af" (UID: "2cf0743b-3191-4fcc-b059-11df3896a2af"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.947102 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-config" (OuterVolumeSpecName: "config") pod "2cf0743b-3191-4fcc-b059-11df3896a2af" (UID: "2cf0743b-3191-4fcc-b059-11df3896a2af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:01.960589 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2cf0743b-3191-4fcc-b059-11df3896a2af" (UID: "2cf0743b-3191-4fcc-b059-11df3896a2af"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.002879 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.002913 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.002927 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqvfw\" (UniqueName: \"kubernetes.io/projected/2cf0743b-3191-4fcc-b059-11df3896a2af-kube-api-access-qqvfw\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.002943 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.002954 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.045968 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2cf0743b-3191-4fcc-b059-11df3896a2af" (UID: "2cf0743b-3191-4fcc-b059-11df3896a2af"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.104020 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cf0743b-3191-4fcc-b059-11df3896a2af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.455745 4728 generic.go:334] "Generic (PLEG): container finished" podID="6411322f-ced0-457e-9f0f-61f37755a0b5" containerID="e74586d56fe3fc4c449831ae1afd16fc145e8dedb33d99b738a7a3ae40b89d08" exitCode=0 Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.456110 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" event={"ID":"6411322f-ced0-457e-9f0f-61f37755a0b5","Type":"ContainerDied","Data":"e74586d56fe3fc4c449831ae1afd16fc145e8dedb33d99b738a7a3ae40b89d08"} Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.463129 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" event={"ID":"2cf0743b-3191-4fcc-b059-11df3896a2af","Type":"ContainerDied","Data":"6ae7e31d5d8bac62cbb6f0a174b74394995b6ee11846e250dc93241ae201d167"} Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.463177 4728 scope.go:117] "RemoveContainer" containerID="0afd798335ed236067f6deff44c2a0b36818609e13b5c23dbcc696cabf19865e" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.463306 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-d2c7z" Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.470850 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d92ab9a-551a-40c0-8de4-1347156c89ec","Type":"ContainerStarted","Data":"151d80e65a1c78b17cdb149b1eeebae66963a39b4d33934b975c94b55f766c5d"} Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.573167 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-d2c7z"] Feb 04 11:46:02 crc kubenswrapper[4728]: I0204 11:46:02.590157 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-d2c7z"] Feb 04 11:46:03 crc kubenswrapper[4728]: I0204 11:46:03.540027 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"524c4174-6f1e-43c5-9c61-4a22fee4883c","Type":"ContainerStarted","Data":"d927e81bdac3eae88723b2f499ebf74789ab96548dc4dc4ae85b430926ee6f7c"} Feb 04 11:46:03 crc kubenswrapper[4728]: I0204 11:46:03.586209 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf0743b-3191-4fcc-b059-11df3896a2af" path="/var/lib/kubelet/pods/2cf0743b-3191-4fcc-b059-11df3896a2af/volumes" Feb 04 11:46:03 crc kubenswrapper[4728]: I0204 11:46:03.586830 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d92ab9a-551a-40c0-8de4-1347156c89ec","Type":"ContainerStarted","Data":"a5741dd37cf2cbc9897e9ac8f0c28e166d311fca3c625c5da0423c4fa021d8d1"} Feb 04 11:46:04 crc kubenswrapper[4728]: I0204 11:46:04.586246 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" event={"ID":"6411322f-ced0-457e-9f0f-61f37755a0b5","Type":"ContainerStarted","Data":"acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563"} Feb 04 11:46:04 crc kubenswrapper[4728]: I0204 11:46:04.586591 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:46:04 crc kubenswrapper[4728]: I0204 11:46:04.593011 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"524c4174-6f1e-43c5-9c61-4a22fee4883c","Type":"ContainerStarted","Data":"289cbda8ea6a4c7297f403a901164493dcb757fb3f75a037ed41f2b23ecd4732"} Feb 04 11:46:04 crc kubenswrapper[4728]: I0204 11:46:04.593203 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="524c4174-6f1e-43c5-9c61-4a22fee4883c" containerName="glance-log" containerID="cri-o://d927e81bdac3eae88723b2f499ebf74789ab96548dc4dc4ae85b430926ee6f7c" gracePeriod=30 Feb 04 11:46:04 crc kubenswrapper[4728]: I0204 11:46:04.593311 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="524c4174-6f1e-43c5-9c61-4a22fee4883c" containerName="glance-httpd" containerID="cri-o://289cbda8ea6a4c7297f403a901164493dcb757fb3f75a037ed41f2b23ecd4732" gracePeriod=30 Feb 04 11:46:04 crc kubenswrapper[4728]: I0204 11:46:04.597369 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d92ab9a-551a-40c0-8de4-1347156c89ec","Type":"ContainerStarted","Data":"981a0c720fc800086f5435de6054954b63664254b1e8836561f94e507c88ffa0"} Feb 04 11:46:04 crc kubenswrapper[4728]: I0204 11:46:04.597503 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0d92ab9a-551a-40c0-8de4-1347156c89ec" containerName="glance-log" containerID="cri-o://a5741dd37cf2cbc9897e9ac8f0c28e166d311fca3c625c5da0423c4fa021d8d1" gracePeriod=30 Feb 04 11:46:04 crc kubenswrapper[4728]: I0204 11:46:04.597605 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0d92ab9a-551a-40c0-8de4-1347156c89ec" containerName="glance-httpd" containerID="cri-o://981a0c720fc800086f5435de6054954b63664254b1e8836561f94e507c88ffa0" gracePeriod=30 Feb 04 11:46:04 crc kubenswrapper[4728]: I0204 11:46:04.645808 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" podStartSLOduration=5.645787058 podStartE2EDuration="5.645787058s" podCreationTimestamp="2026-02-04 11:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:04.609356396 +0000 UTC m=+1113.752060801" watchObservedRunningTime="2026-02-04 11:46:04.645787058 +0000 UTC m=+1113.788491453" Feb 04 11:46:04 crc kubenswrapper[4728]: I0204 11:46:04.662561 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.662539448 podStartE2EDuration="6.662539448s" podCreationTimestamp="2026-02-04 11:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:04.660181882 +0000 UTC m=+1113.802886277" watchObservedRunningTime="2026-02-04 11:46:04.662539448 +0000 UTC m=+1113.805243833" Feb 04 11:46:04 crc kubenswrapper[4728]: I0204 11:46:04.664576 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.664560156 podStartE2EDuration="6.664560156s" podCreationTimestamp="2026-02-04 11:45:58 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:04.635053771 +0000 UTC m=+1113.777758156" watchObservedRunningTime="2026-02-04 11:46:04.664560156 +0000 UTC m=+1113.807264541" Feb 04 11:46:05 crc kubenswrapper[4728]: I0204 11:46:05.449139 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:46:05 crc kubenswrapper[4728]: I0204 11:46:05.449496 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 11:46:05 crc kubenswrapper[4728]: I0204 11:46:05.630346 4728 generic.go:334] "Generic (PLEG): container finished" podID="524c4174-6f1e-43c5-9c61-4a22fee4883c" containerID="289cbda8ea6a4c7297f403a901164493dcb757fb3f75a037ed41f2b23ecd4732" exitCode=0 Feb 04 11:46:05 crc kubenswrapper[4728]: I0204 11:46:05.630409 4728 generic.go:334] "Generic (PLEG): container finished" podID="524c4174-6f1e-43c5-9c61-4a22fee4883c" containerID="d927e81bdac3eae88723b2f499ebf74789ab96548dc4dc4ae85b430926ee6f7c" exitCode=143 Feb 04 11:46:05 crc kubenswrapper[4728]: I0204 11:46:05.630413 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"524c4174-6f1e-43c5-9c61-4a22fee4883c","Type":"ContainerDied","Data":"289cbda8ea6a4c7297f403a901164493dcb757fb3f75a037ed41f2b23ecd4732"} Feb 04 11:46:05 crc kubenswrapper[4728]: I0204 11:46:05.630466 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"524c4174-6f1e-43c5-9c61-4a22fee4883c","Type":"ContainerDied","Data":"d927e81bdac3eae88723b2f499ebf74789ab96548dc4dc4ae85b430926ee6f7c"} Feb 04 11:46:05 crc kubenswrapper[4728]: I0204 11:46:05.634880 4728 generic.go:334] "Generic (PLEG): container finished" podID="0d92ab9a-551a-40c0-8de4-1347156c89ec" containerID="981a0c720fc800086f5435de6054954b63664254b1e8836561f94e507c88ffa0" exitCode=0 Feb 04 11:46:05 crc kubenswrapper[4728]: I0204 11:46:05.634912 4728 generic.go:334] "Generic (PLEG): container finished" podID="0d92ab9a-551a-40c0-8de4-1347156c89ec" containerID="a5741dd37cf2cbc9897e9ac8f0c28e166d311fca3c625c5da0423c4fa021d8d1" exitCode=143 Feb 04 11:46:05 crc kubenswrapper[4728]: I0204 11:46:05.634974 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d92ab9a-551a-40c0-8de4-1347156c89ec","Type":"ContainerDied","Data":"981a0c720fc800086f5435de6054954b63664254b1e8836561f94e507c88ffa0"} Feb 04 11:46:05 crc kubenswrapper[4728]: I0204 11:46:05.635008 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d92ab9a-551a-40c0-8de4-1347156c89ec","Type":"ContainerDied","Data":"a5741dd37cf2cbc9897e9ac8f0c28e166d311fca3c625c5da0423c4fa021d8d1"} Feb 04 11:46:06 crc kubenswrapper[4728]: I0204 11:46:06.646059 4728 generic.go:334] "Generic (PLEG): container finished" podID="a25eb9f6-2a2d-4eb7-b643-c6a354049b47" containerID="7977719a93e758c78f02f7d98ada25b4bccc6d39041bcd036a40072a0b3b5b90" 
exitCode=0 Feb 04 11:46:06 crc kubenswrapper[4728]: I0204 11:46:06.646148 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxrgg" event={"ID":"a25eb9f6-2a2d-4eb7-b643-c6a354049b47","Type":"ContainerDied","Data":"7977719a93e758c78f02f7d98ada25b4bccc6d39041bcd036a40072a0b3b5b90"} Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.664723 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"524c4174-6f1e-43c5-9c61-4a22fee4883c","Type":"ContainerDied","Data":"5da4ed5293f9d6eb8218a4f97c5f744766b757f6c148f2ec540b44e96170a839"} Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.665031 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5da4ed5293f9d6eb8218a4f97c5f744766b757f6c148f2ec540b44e96170a839" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.667488 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxrgg" event={"ID":"a25eb9f6-2a2d-4eb7-b643-c6a354049b47","Type":"ContainerDied","Data":"670ea8c2557ffe3a52b9f3945c7fe201b249a85d0a8920721e477d91fb644140"} Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.667529 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="670ea8c2557ffe3a52b9f3945c7fe201b249a85d0a8920721e477d91fb644140" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.759104 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.769429 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.786949 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.856028 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-config-data\") pod \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.856131 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-scripts\") pod \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.856204 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-combined-ca-bundle\") pod \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.856311 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbhcg\" (UniqueName: \"kubernetes.io/projected/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-kube-api-access-kbhcg\") pod \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.856365 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-credential-keys\") pod \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.856402 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-fernet-keys\") pod \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\" (UID: \"a25eb9f6-2a2d-4eb7-b643-c6a354049b47\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.872671 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a25eb9f6-2a2d-4eb7-b643-c6a354049b47" (UID: "a25eb9f6-2a2d-4eb7-b643-c6a354049b47"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.879864 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-scripts" (OuterVolumeSpecName: "scripts") pod "a25eb9f6-2a2d-4eb7-b643-c6a354049b47" (UID: "a25eb9f6-2a2d-4eb7-b643-c6a354049b47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.885460 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a25eb9f6-2a2d-4eb7-b643-c6a354049b47" (UID: "a25eb9f6-2a2d-4eb7-b643-c6a354049b47"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.886069 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-kube-api-access-kbhcg" (OuterVolumeSpecName: "kube-api-access-kbhcg") pod "a25eb9f6-2a2d-4eb7-b643-c6a354049b47" (UID: "a25eb9f6-2a2d-4eb7-b643-c6a354049b47"). InnerVolumeSpecName "kube-api-access-kbhcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.905732 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a25eb9f6-2a2d-4eb7-b643-c6a354049b47" (UID: "a25eb9f6-2a2d-4eb7-b643-c6a354049b47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.919772 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-config-data" (OuterVolumeSpecName: "config-data") pod "a25eb9f6-2a2d-4eb7-b643-c6a354049b47" (UID: "a25eb9f6-2a2d-4eb7-b643-c6a354049b47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958554 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-httpd-run\") pod \"524c4174-6f1e-43c5-9c61-4a22fee4883c\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958646 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-combined-ca-bundle\") pod \"524c4174-6f1e-43c5-9c61-4a22fee4883c\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958678 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-scripts\") pod \"524c4174-6f1e-43c5-9c61-4a22fee4883c\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958779 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-scripts\") pod \"0d92ab9a-551a-40c0-8de4-1347156c89ec\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958822 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-config-data\") pod \"0d92ab9a-551a-40c0-8de4-1347156c89ec\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958838 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-config-data\") pod \"524c4174-6f1e-43c5-9c61-4a22fee4883c\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958857 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"524c4174-6f1e-43c5-9c61-4a22fee4883c\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958876 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-internal-tls-certs\") pod \"524c4174-6f1e-43c5-9c61-4a22fee4883c\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958901 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-combined-ca-bundle\") pod \"0d92ab9a-551a-40c0-8de4-1347156c89ec\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958918 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"0d92ab9a-551a-40c0-8de4-1347156c89ec\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958946 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfzkj\" (UniqueName: \"kubernetes.io/projected/0d92ab9a-551a-40c0-8de4-1347156c89ec-kube-api-access-rfzkj\") pod \"0d92ab9a-551a-40c0-8de4-1347156c89ec\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958969 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-logs\") pod \"524c4174-6f1e-43c5-9c61-4a22fee4883c\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.958989 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89bqp\" (UniqueName: \"kubernetes.io/projected/524c4174-6f1e-43c5-9c61-4a22fee4883c-kube-api-access-89bqp\") pod \"524c4174-6f1e-43c5-9c61-4a22fee4883c\" (UID: \"524c4174-6f1e-43c5-9c61-4a22fee4883c\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959006 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-public-tls-certs\") pod \"0d92ab9a-551a-40c0-8de4-1347156c89ec\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959027 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-logs\") pod \"0d92ab9a-551a-40c0-8de4-1347156c89ec\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959057 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-httpd-run\") pod \"0d92ab9a-551a-40c0-8de4-1347156c89ec\" (UID: \"0d92ab9a-551a-40c0-8de4-1347156c89ec\") " Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959049 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "524c4174-6f1e-43c5-9c61-4a22fee4883c" (UID: "524c4174-6f1e-43c5-9c61-4a22fee4883c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959244 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-logs" (OuterVolumeSpecName: "logs") pod "524c4174-6f1e-43c5-9c61-4a22fee4883c" (UID: "524c4174-6f1e-43c5-9c61-4a22fee4883c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959627 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959673 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbhcg\" (UniqueName: \"kubernetes.io/projected/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-kube-api-access-kbhcg\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959685 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959695 4728 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959704 4728 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959712 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/524c4174-6f1e-43c5-9c61-4a22fee4883c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959721 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959729 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25eb9f6-2a2d-4eb7-b643-c6a354049b47-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.959724 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0d92ab9a-551a-40c0-8de4-1347156c89ec" (UID: "0d92ab9a-551a-40c0-8de4-1347156c89ec"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.960790 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-logs" (OuterVolumeSpecName: "logs") pod "0d92ab9a-551a-40c0-8de4-1347156c89ec" (UID: "0d92ab9a-551a-40c0-8de4-1347156c89ec"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.963138 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524c4174-6f1e-43c5-9c61-4a22fee4883c-kube-api-access-89bqp" (OuterVolumeSpecName: "kube-api-access-89bqp") pod "524c4174-6f1e-43c5-9c61-4a22fee4883c" (UID: "524c4174-6f1e-43c5-9c61-4a22fee4883c"). InnerVolumeSpecName "kube-api-access-89bqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.965512 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-scripts" (OuterVolumeSpecName: "scripts") pod "0d92ab9a-551a-40c0-8de4-1347156c89ec" (UID: "0d92ab9a-551a-40c0-8de4-1347156c89ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.965538 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d92ab9a-551a-40c0-8de4-1347156c89ec-kube-api-access-rfzkj" (OuterVolumeSpecName: "kube-api-access-rfzkj") pod "0d92ab9a-551a-40c0-8de4-1347156c89ec" (UID: "0d92ab9a-551a-40c0-8de4-1347156c89ec"). InnerVolumeSpecName "kube-api-access-rfzkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.974145 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "524c4174-6f1e-43c5-9c61-4a22fee4883c" (UID: "524c4174-6f1e-43c5-9c61-4a22fee4883c"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.975096 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-scripts" (OuterVolumeSpecName: "scripts") pod "524c4174-6f1e-43c5-9c61-4a22fee4883c" (UID: "524c4174-6f1e-43c5-9c61-4a22fee4883c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.981721 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "0d92ab9a-551a-40c0-8de4-1347156c89ec" (UID: "0d92ab9a-551a-40c0-8de4-1347156c89ec"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 11:46:08 crc kubenswrapper[4728]: I0204 11:46:08.997153 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d92ab9a-551a-40c0-8de4-1347156c89ec" (UID: "0d92ab9a-551a-40c0-8de4-1347156c89ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:08.999619 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "524c4174-6f1e-43c5-9c61-4a22fee4883c" (UID: "524c4174-6f1e-43c5-9c61-4a22fee4883c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.021922 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-config-data" (OuterVolumeSpecName: "config-data") pod "0d92ab9a-551a-40c0-8de4-1347156c89ec" (UID: "0d92ab9a-551a-40c0-8de4-1347156c89ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.030459 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0d92ab9a-551a-40c0-8de4-1347156c89ec" (UID: "0d92ab9a-551a-40c0-8de4-1347156c89ec"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.030910 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "524c4174-6f1e-43c5-9c61-4a22fee4883c" (UID: "524c4174-6f1e-43c5-9c61-4a22fee4883c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.037121 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-config-data" (OuterVolumeSpecName: "config-data") pod "524c4174-6f1e-43c5-9c61-4a22fee4883c" (UID: "524c4174-6f1e-43c5-9c61-4a22fee4883c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.060896 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.060933 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.060941 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.060951 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.060958 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.060988 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.060996 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/524c4174-6f1e-43c5-9c61-4a22fee4883c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.061005 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.061018 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.061027 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfzkj\" (UniqueName: \"kubernetes.io/projected/0d92ab9a-551a-40c0-8de4-1347156c89ec-kube-api-access-rfzkj\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.061038 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89bqp\" (UniqueName: \"kubernetes.io/projected/524c4174-6f1e-43c5-9c61-4a22fee4883c-kube-api-access-89bqp\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.061048 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d92ab9a-551a-40c0-8de4-1347156c89ec-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.061061 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.061070 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0d92ab9a-551a-40c0-8de4-1347156c89ec-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.083181 4728 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.083969 4728 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.163129 4728 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.163167 4728 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.567973 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.656979 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-w8kwc"] Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.657561 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" podUID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" containerName="dnsmasq-dns" 
containerID="cri-o://c3292f2c90f002aae96298b626c233aa790957652ad7a006bda81763bf464698" gracePeriod=10 Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.707655 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.707742 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.707642 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0d92ab9a-551a-40c0-8de4-1347156c89ec","Type":"ContainerDied","Data":"151d80e65a1c78b17cdb149b1eeebae66963a39b4d33934b975c94b55f766c5d"} Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.708254 4728 scope.go:117] "RemoveContainer" containerID="981a0c720fc800086f5435de6054954b63664254b1e8836561f94e507c88ffa0" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.708420 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pxrgg" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.811659 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.833120 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.844930 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.852441 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.900795 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:46:09 crc kubenswrapper[4728]: E0204 11:46:09.901169 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25eb9f6-2a2d-4eb7-b643-c6a354049b47" containerName="keystone-bootstrap" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901189 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25eb9f6-2a2d-4eb7-b643-c6a354049b47" containerName="keystone-bootstrap" Feb 04 11:46:09 crc kubenswrapper[4728]: E0204 11:46:09.901220 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56996edc-8d58-43e9-8c65-9b3b6b08fe10" containerName="init" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901227 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="56996edc-8d58-43e9-8c65-9b3b6b08fe10" containerName="init" Feb 04 11:46:09 crc kubenswrapper[4728]: E0204 11:46:09.901238 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524c4174-6f1e-43c5-9c61-4a22fee4883c" containerName="glance-log" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901246 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="524c4174-6f1e-43c5-9c61-4a22fee4883c" containerName="glance-log" Feb 04 11:46:09 crc kubenswrapper[4728]: E0204 11:46:09.901256 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524c4174-6f1e-43c5-9c61-4a22fee4883c" containerName="glance-httpd" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901263 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="524c4174-6f1e-43c5-9c61-4a22fee4883c" containerName="glance-httpd" Feb 04 
11:46:09 crc kubenswrapper[4728]: E0204 11:46:09.901278 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56996edc-8d58-43e9-8c65-9b3b6b08fe10" containerName="dnsmasq-dns" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901285 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="56996edc-8d58-43e9-8c65-9b3b6b08fe10" containerName="dnsmasq-dns" Feb 04 11:46:09 crc kubenswrapper[4728]: E0204 11:46:09.901309 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d92ab9a-551a-40c0-8de4-1347156c89ec" containerName="glance-log" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901318 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d92ab9a-551a-40c0-8de4-1347156c89ec" containerName="glance-log" Feb 04 11:46:09 crc kubenswrapper[4728]: E0204 11:46:09.901329 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d92ab9a-551a-40c0-8de4-1347156c89ec" containerName="glance-httpd" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901334 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d92ab9a-551a-40c0-8de4-1347156c89ec" containerName="glance-httpd" Feb 04 11:46:09 crc kubenswrapper[4728]: E0204 11:46:09.901344 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf0743b-3191-4fcc-b059-11df3896a2af" containerName="init" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901350 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf0743b-3191-4fcc-b059-11df3896a2af" containerName="init" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901509 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d92ab9a-551a-40c0-8de4-1347156c89ec" containerName="glance-log" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901518 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf0743b-3191-4fcc-b059-11df3896a2af" containerName="init" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901530 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d92ab9a-551a-40c0-8de4-1347156c89ec" containerName="glance-httpd" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901543 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="56996edc-8d58-43e9-8c65-9b3b6b08fe10" containerName="dnsmasq-dns" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901552 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="524c4174-6f1e-43c5-9c61-4a22fee4883c" containerName="glance-log" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901563 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25eb9f6-2a2d-4eb7-b643-c6a354049b47" containerName="keystone-bootstrap" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.901574 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="524c4174-6f1e-43c5-9c61-4a22fee4883c" containerName="glance-httpd" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.902397 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.907940 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rnmhk" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.908215 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.908386 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.908505 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.909115 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.910590 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.913434 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.913482 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.920433 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.940684 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:46:09 crc kubenswrapper[4728]: I0204 11:46:09.986259 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pxrgg"] Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.012947 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pxrgg"] Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.037995 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-z5s6t"] Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.039110 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.044336 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.044676 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.044862 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-prll6" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.045313 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.050389 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.053174 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-z5s6t"] Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.086808 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.086865 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.086901 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.086999 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.087064 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-logs\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.087096 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.087156 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.087180 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.087255 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjmq\" (UniqueName: \"kubernetes.io/projected/fd18bbb4-d813-4688-ad80-574154978db4-kube-api-access-zgjmq\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.087297 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.087317 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.087696 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.087893 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.088072 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6r7b\" (UniqueName: \"kubernetes.io/projected/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-kube-api-access-z6r7b\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.088146 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " 
pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.088542 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190159 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6r7b\" (UniqueName: \"kubernetes.io/projected/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-kube-api-access-z6r7b\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190216 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190268 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190305 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l25pn\" (UniqueName: \"kubernetes.io/projected/e8339046-9234-4489-b308-c592a7afa3b0-kube-api-access-l25pn\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190343 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190370 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190398 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-fernet-keys\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190421 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " 
pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190444 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-config-data\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190469 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190501 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-logs\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190525 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190564 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190583 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190615 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-scripts\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190637 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgjmq\" (UniqueName: \"kubernetes.io/projected/fd18bbb4-d813-4688-ad80-574154978db4-kube-api-access-zgjmq\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190657 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-combined-ca-bundle\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc 
kubenswrapper[4728]: I0204 11:46:10.190696 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-credential-keys\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190723 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190762 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190791 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.190816 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.192152 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-logs\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.192430 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.192612 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.192820 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.194115 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.195522 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.196114 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.199641 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.206949 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.208323 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.276084 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.276969 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.278467 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.289779 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgjmq\" (UniqueName: 
\"kubernetes.io/projected/fd18bbb4-d813-4688-ad80-574154978db4-kube-api-access-zgjmq\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.289883 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6r7b\" (UniqueName: \"kubernetes.io/projected/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-kube-api-access-z6r7b\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.293087 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.294706 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-scripts\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.294734 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-combined-ca-bundle\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.294775 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-credential-keys\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.294867 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l25pn\" (UniqueName: \"kubernetes.io/projected/e8339046-9234-4489-b308-c592a7afa3b0-kube-api-access-l25pn\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.294909 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-fernet-keys\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.294933 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-config-data\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.306985 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-combined-ca-bundle\") pod \"keystone-bootstrap-z5s6t\" (UID: 
\"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.316253 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-fernet-keys\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.318461 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-config-data\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.336314 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-scripts\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.339289 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-credential-keys\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.347780 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l25pn\" (UniqueName: \"kubernetes.io/projected/e8339046-9234-4489-b308-c592a7afa3b0-kube-api-access-l25pn\") pod \"keystone-bootstrap-z5s6t\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.382855 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.394618 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.404520 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.526700 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.542493 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.730354 4728 generic.go:334] "Generic (PLEG): container finished" podID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" containerID="c3292f2c90f002aae96298b626c233aa790957652ad7a006bda81763bf464698" exitCode=0 Feb 04 11:46:10 crc kubenswrapper[4728]: I0204 11:46:10.730405 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" event={"ID":"58be142e-d49a-44fb-81e4-dc38bc4ea3d1","Type":"ContainerDied","Data":"c3292f2c90f002aae96298b626c233aa790957652ad7a006bda81763bf464698"} Feb 04 11:46:11 crc kubenswrapper[4728]: I0204 11:46:11.571974 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d92ab9a-551a-40c0-8de4-1347156c89ec" path="/var/lib/kubelet/pods/0d92ab9a-551a-40c0-8de4-1347156c89ec/volumes" Feb 04 11:46:11 crc kubenswrapper[4728]: I0204 11:46:11.572995 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="524c4174-6f1e-43c5-9c61-4a22fee4883c" path="/var/lib/kubelet/pods/524c4174-6f1e-43c5-9c61-4a22fee4883c/volumes" Feb 04 11:46:11 crc kubenswrapper[4728]: I0204 11:46:11.573961 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25eb9f6-2a2d-4eb7-b643-c6a354049b47" path="/var/lib/kubelet/pods/a25eb9f6-2a2d-4eb7-b643-c6a354049b47/volumes" Feb 04 11:46:13 crc kubenswrapper[4728]: I0204 11:46:13.525082 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" podUID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 04 11:46:18 crc kubenswrapper[4728]: I0204 11:46:18.524743 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" podUID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 04 11:46:23 crc kubenswrapper[4728]: I0204 11:46:23.525634 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" podUID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 04 11:46:23 crc kubenswrapper[4728]: I0204 11:46:23.526404 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" Feb 04 11:46:24 crc kubenswrapper[4728]: I0204 11:46:24.839854 4728 generic.go:334] "Generic (PLEG): container finished" podID="e3187936-5c4f-4c33-ae73-63309fa067aa" containerID="57a2b04e6bb372fa84d619cb384d7f6c9e8ed4259615c3ecc6a06af4bdcda13c" exitCode=0 Feb 04 11:46:24 crc kubenswrapper[4728]: I0204 11:46:24.839940 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kj7rj" event={"ID":"e3187936-5c4f-4c33-ae73-63309fa067aa","Type":"ContainerDied","Data":"57a2b04e6bb372fa84d619cb384d7f6c9e8ed4259615c3ecc6a06af4bdcda13c"} Feb 04 11:46:26 crc kubenswrapper[4728]: E0204 11:46:26.139849 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 04 11:46:26 crc kubenswrapper[4728]: E0204 11:46:26.140309 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8whl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kgzbm_openstack(d949b343-bfde-4d50-81b1-a7c66765c076): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 11:46:26 crc kubenswrapper[4728]: E0204 11:46:26.141827 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kgzbm" podUID="d949b343-bfde-4d50-81b1-a7c66765c076" Feb 04 11:46:26 crc kubenswrapper[4728]: E0204 11:46:26.586660 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 04 11:46:26 crc kubenswrapper[4728]: E0204 11:46:26.586832 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j62df,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-nqv8q_openstack(aad93666-c664-4ded-8970-993c847ac437): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 11:46:26 crc kubenswrapper[4728]: E0204 11:46:26.588044 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-nqv8q" podUID="aad93666-c664-4ded-8970-993c847ac437" Feb 04 11:46:26 crc kubenswrapper[4728]: E0204 11:46:26.858351 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-kgzbm" podUID="d949b343-bfde-4d50-81b1-a7c66765c076" Feb 04 11:46:26 crc kubenswrapper[4728]: E0204 11:46:26.859524 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-nqv8q" podUID="aad93666-c664-4ded-8970-993c847ac437" Feb 04 11:46:26 crc kubenswrapper[4728]: E0204 11:46:26.962710 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 04 11:46:26 crc kubenswrapper[4728]: E0204 11:46:26.963087 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n696h5d7hb8hd6h594h548h9dh84h5cfhfbh5b9h55bh649h676h5f9h85h5bch57ch5bbh84h54hbfh686h5ch5c9hc4h7fh5h75hd7hbch5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqbk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(20df29d2-926c-4921-a7f2-eac948556d19): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 11:46:27 crc kubenswrapper[4728]: E0204 11:46:27.203625 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 04 11:46:27 crc kubenswrapper[4728]: E0204 11:46:27.203840 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95mpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jx9lp_openstack(2186aabd-28ff-488a-a224-01c14710adac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 04 11:46:27 crc kubenswrapper[4728]: E0204 11:46:27.205128 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-jx9lp" podUID="2186aabd-28ff-488a-a224-01c14710adac" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.240070 4728 scope.go:117] "RemoveContainer" containerID="a5741dd37cf2cbc9897e9ac8f0c28e166d311fca3c625c5da0423c4fa021d8d1" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.412908 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.419386 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.628830 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-config\") pod \"e3187936-5c4f-4c33-ae73-63309fa067aa\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.628921 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8s8w\" (UniqueName: \"kubernetes.io/projected/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-kube-api-access-p8s8w\") pod \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.628952 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6fdb\" (UniqueName: \"kubernetes.io/projected/e3187936-5c4f-4c33-ae73-63309fa067aa-kube-api-access-h6fdb\") pod \"e3187936-5c4f-4c33-ae73-63309fa067aa\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.629002 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-config\") pod \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.629049 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-nb\") pod \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.629130 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-swift-storage-0\") pod \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.629154 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-svc\") pod \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.629230 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-sb\") pod \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\" (UID: \"58be142e-d49a-44fb-81e4-dc38bc4ea3d1\") " Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.629289 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-combined-ca-bundle\") pod \"e3187936-5c4f-4c33-ae73-63309fa067aa\" (UID: \"e3187936-5c4f-4c33-ae73-63309fa067aa\") " Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.636387 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3187936-5c4f-4c33-ae73-63309fa067aa-kube-api-access-h6fdb" (OuterVolumeSpecName: "kube-api-access-h6fdb") pod 
"e3187936-5c4f-4c33-ae73-63309fa067aa" (UID: "e3187936-5c4f-4c33-ae73-63309fa067aa"). InnerVolumeSpecName "kube-api-access-h6fdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.652336 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-kube-api-access-p8s8w" (OuterVolumeSpecName: "kube-api-access-p8s8w") pod "58be142e-d49a-44fb-81e4-dc38bc4ea3d1" (UID: "58be142e-d49a-44fb-81e4-dc38bc4ea3d1"). InnerVolumeSpecName "kube-api-access-p8s8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.686939 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "58be142e-d49a-44fb-81e4-dc38bc4ea3d1" (UID: "58be142e-d49a-44fb-81e4-dc38bc4ea3d1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.694387 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-config" (OuterVolumeSpecName: "config") pod "58be142e-d49a-44fb-81e4-dc38bc4ea3d1" (UID: "58be142e-d49a-44fb-81e4-dc38bc4ea3d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.697935 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-config" (OuterVolumeSpecName: "config") pod "e3187936-5c4f-4c33-ae73-63309fa067aa" (UID: "e3187936-5c4f-4c33-ae73-63309fa067aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.700698 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "58be142e-d49a-44fb-81e4-dc38bc4ea3d1" (UID: "58be142e-d49a-44fb-81e4-dc38bc4ea3d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.702079 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "58be142e-d49a-44fb-81e4-dc38bc4ea3d1" (UID: "58be142e-d49a-44fb-81e4-dc38bc4ea3d1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.719935 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3187936-5c4f-4c33-ae73-63309fa067aa" (UID: "e3187936-5c4f-4c33-ae73-63309fa067aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.729397 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "58be142e-d49a-44fb-81e4-dc38bc4ea3d1" (UID: "58be142e-d49a-44fb-81e4-dc38bc4ea3d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.731835 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8s8w\" (UniqueName: \"kubernetes.io/projected/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-kube-api-access-p8s8w\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.731871 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6fdb\" (UniqueName: \"kubernetes.io/projected/e3187936-5c4f-4c33-ae73-63309fa067aa-kube-api-access-h6fdb\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.731884 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.733225 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.733238 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.733251 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.733262 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58be142e-d49a-44fb-81e4-dc38bc4ea3d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.733272 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.733283 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3187936-5c4f-4c33-ae73-63309fa067aa-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.752353 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-z5s6t"] Feb 04 11:46:27 crc kubenswrapper[4728]: W0204 11:46:27.757181 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8339046_9234_4489_b308_c592a7afa3b0.slice/crio-5439e3f5eea8f6a1d7113c103df8b62a696a451b0585786fef502354c513c932 WatchSource:0}: Error finding container 5439e3f5eea8f6a1d7113c103df8b62a696a451b0585786fef502354c513c932: Status 404 returned error can't find the container with id 
5439e3f5eea8f6a1d7113c103df8b62a696a451b0585786fef502354c513c932 Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.791149 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.878698 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.881171 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd18bbb4-d813-4688-ad80-574154978db4","Type":"ContainerStarted","Data":"651cd5061b3f4590899b7ccfe7b0228753d1c80c090b4e5bc5075c4db381db1f"} Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.884408 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z5s6t" event={"ID":"e8339046-9234-4489-b308-c592a7afa3b0","Type":"ContainerStarted","Data":"5439e3f5eea8f6a1d7113c103df8b62a696a451b0585786fef502354c513c932"} Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.887383 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xrxlt" event={"ID":"0e456f3f-3163-48da-9e88-aaddade811b6","Type":"ContainerStarted","Data":"fa26f10b54d72ad9d06108aa3070a2337f7c5d879ba5987b4395ea43542f7cc1"} Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.890074 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" event={"ID":"58be142e-d49a-44fb-81e4-dc38bc4ea3d1","Type":"ContainerDied","Data":"b2b2e20ed103b3fac8d24b4c53af9801003ddb4a66a008068858b9c36b0ed592"} Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.890122 4728 scope.go:117] "RemoveContainer" containerID="c3292f2c90f002aae96298b626c233aa790957652ad7a006bda81763bf464698" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.890224 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-w8kwc" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.907047 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kj7rj" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.907041 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kj7rj" event={"ID":"e3187936-5c4f-4c33-ae73-63309fa067aa","Type":"ContainerDied","Data":"f42a5c9f5affa75020b783e141cb644659d152c528cf6c5c3ff39de0360bfe78"} Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.907183 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42a5c9f5affa75020b783e141cb644659d152c528cf6c5c3ff39de0360bfe78" Feb 04 11:46:27 crc kubenswrapper[4728]: E0204 11:46:27.911803 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-jx9lp" podUID="2186aabd-28ff-488a-a224-01c14710adac" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.919488 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-xrxlt" podStartSLOduration=3.510876207 podStartE2EDuration="29.919385771s" podCreationTimestamp="2026-02-04 11:45:58 +0000 UTC" firstStartedPulling="2026-02-04 11:46:00.835113291 +0000 UTC m=+1109.977817676" lastFinishedPulling="2026-02-04 11:46:27.243622845 +0000 UTC m=+1136.386327240" observedRunningTime="2026-02-04 11:46:27.902706102 +0000 UTC m=+1137.045410497" watchObservedRunningTime="2026-02-04 11:46:27.919385771 +0000 UTC m=+1137.062090166" Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.935380 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-w8kwc"] Feb 04 11:46:27 crc kubenswrapper[4728]: I0204 11:46:27.946372 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-w8kwc"] Feb 04 11:46:28 crc kubenswrapper[4728]: W0204 11:46:28.168197 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea257ab3_f8f8_4546_a1e1_f5af6b1d857d.slice/crio-9913d1a0dd20ae9df333e9fca2b423373adb1f2e2f5d81f9b8aab772cb4f07b8 WatchSource:0}: Error finding container 9913d1a0dd20ae9df333e9fca2b423373adb1f2e2f5d81f9b8aab772cb4f07b8: Status 404 returned error can't find the container with id 9913d1a0dd20ae9df333e9fca2b423373adb1f2e2f5d81f9b8aab772cb4f07b8 Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.175498 4728 scope.go:117] "RemoveContainer" containerID="b6c4c83e0215046df1192e7145020a38d03ad4da0cd913d08e6399c3558d03d7" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.724123 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9pjrv"] Feb 04 11:46:28 crc kubenswrapper[4728]: E0204 11:46:28.725175 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3187936-5c4f-4c33-ae73-63309fa067aa" containerName="neutron-db-sync" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.725196 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3187936-5c4f-4c33-ae73-63309fa067aa" containerName="neutron-db-sync" Feb 04 11:46:28 crc kubenswrapper[4728]: E0204 11:46:28.725212 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" containerName="dnsmasq-dns" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.725220 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" containerName="dnsmasq-dns" Feb 04 11:46:28 crc kubenswrapper[4728]: E0204 11:46:28.725238 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" containerName="init" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.725245 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" containerName="init" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.725446 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" containerName="dnsmasq-dns" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.725466 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3187936-5c4f-4c33-ae73-63309fa067aa" containerName="neutron-db-sync" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.726509 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.733447 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9pjrv"] Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.872140 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-config\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.872304 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.872379 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.872418 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.872490 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bv2h\" (UniqueName: \"kubernetes.io/projected/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-kube-api-access-2bv2h\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.872521 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: 
\"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.929724 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7cd5549f4d-zhk8w"] Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.931217 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.933416 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.933571 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-l5ggl" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.933707 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.941382 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.944405 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z5s6t" event={"ID":"e8339046-9234-4489-b308-c592a7afa3b0","Type":"ContainerStarted","Data":"b6aed950b1defebc26df71094e7110da169177e75b9abd57da391ef1a323cfea"} Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.948410 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20df29d2-926c-4921-a7f2-eac948556d19","Type":"ContainerStarted","Data":"43a4bb62c8185cd8f14c958b8c82a3c1c6d6b5e4155022dae5323ea4c9d7ae26"} Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.959209 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d","Type":"ContainerStarted","Data":"006412edd580c08a1199a125e3d71dba305327732d47a37b5785b686f562634b"} Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.959269 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d","Type":"ContainerStarted","Data":"9913d1a0dd20ae9df333e9fca2b423373adb1f2e2f5d81f9b8aab772cb4f07b8"} Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.961994 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd18bbb4-d813-4688-ad80-574154978db4","Type":"ContainerStarted","Data":"6d040dba62774b2756aa4d5ffd2d481fcb23e48f2773d7846d6ec0893a05317a"} Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.975877 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bv2h\" (UniqueName: \"kubernetes.io/projected/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-kube-api-access-2bv2h\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.975916 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.975962 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-config\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.976010 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.976088 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.976121 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.977757 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.983569 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.984132 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-config\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.986952 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cd5549f4d-zhk8w"] Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.989910 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:28 crc kubenswrapper[4728]: I0204 11:46:28.999568 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.002969 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bv2h\" (UniqueName: \"kubernetes.io/projected/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-kube-api-access-2bv2h\") pod \"dnsmasq-dns-55f844cf75-9pjrv\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.032793 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-z5s6t" podStartSLOduration=19.032775678 podStartE2EDuration="19.032775678s" podCreationTimestamp="2026-02-04 11:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:29.02196344 +0000 UTC m=+1138.164667825" watchObservedRunningTime="2026-02-04 11:46:29.032775678 +0000 UTC m=+1138.175480063" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.069352 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.087061 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-combined-ca-bundle\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.087186 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-httpd-config\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.087281 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-ovndb-tls-certs\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.087295 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5d4d\" (UniqueName: \"kubernetes.io/projected/beab3e33-962c-46f9-ac60-a8a739d86cac-kube-api-access-d5d4d\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.087313 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-config\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.189116 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-httpd-config\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.189422 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-d5d4d\" (UniqueName: \"kubernetes.io/projected/beab3e33-962c-46f9-ac60-a8a739d86cac-kube-api-access-d5d4d\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.189441 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-ovndb-tls-certs\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.189458 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-config\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.189488 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-combined-ca-bundle\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.200467 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-httpd-config\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.201633 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-ovndb-tls-certs\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.201918 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-combined-ca-bundle\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.207268 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5d4d\" (UniqueName: \"kubernetes.io/projected/beab3e33-962c-46f9-ac60-a8a739d86cac-kube-api-access-d5d4d\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.216184 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-config\") pod \"neutron-7cd5549f4d-zhk8w\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") " pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.446323 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.566284 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58be142e-d49a-44fb-81e4-dc38bc4ea3d1" path="/var/lib/kubelet/pods/58be142e-d49a-44fb-81e4-dc38bc4ea3d1/volumes" Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.630462 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9pjrv"] Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.986523 4728 generic.go:334] "Generic (PLEG): container finished" podID="0e456f3f-3163-48da-9e88-aaddade811b6" containerID="fa26f10b54d72ad9d06108aa3070a2337f7c5d879ba5987b4395ea43542f7cc1" exitCode=0 Feb 04 11:46:29 crc kubenswrapper[4728]: I0204 11:46:29.986911 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xrxlt" event={"ID":"0e456f3f-3163-48da-9e88-aaddade811b6","Type":"ContainerDied","Data":"fa26f10b54d72ad9d06108aa3070a2337f7c5d879ba5987b4395ea43542f7cc1"} Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.004372 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d","Type":"ContainerStarted","Data":"304804edb116bd86772664ca52ea88307a40cbbf866b97b0dffa97e6fecbd3ea"} Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.011728 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd18bbb4-d813-4688-ad80-574154978db4","Type":"ContainerStarted","Data":"29d18c750afd6b4c043a6698b36057d2034dc163a11f7d6fe752bef2b3b52f80"} Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.014293 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" event={"ID":"248d137a-3361-4df7-b7b6-c26cfb6c5a6d","Type":"ContainerStarted","Data":"75606e0925d72e1a953cb2ac5771226f3919f91bafaf7498742f8465c1aab130"} Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.040306 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.040280882 podStartE2EDuration="21.040280882s" podCreationTimestamp="2026-02-04 11:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:30.027435695 +0000 UTC m=+1139.170140090" watchObservedRunningTime="2026-02-04 11:46:30.040280882 +0000 UTC m=+1139.182985277" Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.059231 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.059211274 podStartE2EDuration="21.059211274s" podCreationTimestamp="2026-02-04 11:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:30.056037928 +0000 UTC m=+1139.198742313" watchObservedRunningTime="2026-02-04 11:46:30.059211274 +0000 UTC m=+1139.201915659" Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.097589 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cd5549f4d-zhk8w"] Feb 04 11:46:30 crc kubenswrapper[4728]: W0204 11:46:30.144904 4728 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeab3e33_962c_46f9_ac60_a8a739d86cac.slice/crio-842aa7633d9953929b0644cc56d041bb8f9b052cc337e98ad32444485220866d WatchSource:0}: Error finding container 842aa7633d9953929b0644cc56d041bb8f9b052cc337e98ad32444485220866d: Status 404 returned error can't find the container with id 842aa7633d9953929b0644cc56d041bb8f9b052cc337e98ad32444485220866d Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.527185 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.527229 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.543301 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.543353 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.559371 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.580886 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.590025 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:30 crc kubenswrapper[4728]: I0204 11:46:30.595011 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.027448 4728 generic.go:334] "Generic (PLEG): container finished" podID="248d137a-3361-4df7-b7b6-c26cfb6c5a6d" containerID="e9e77c5c6160459ec8ccd30b6f9951208542784c0ad592e2b2eb68b4a817bc41" exitCode=0 Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.027508 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" event={"ID":"248d137a-3361-4df7-b7b6-c26cfb6c5a6d","Type":"ContainerDied","Data":"e9e77c5c6160459ec8ccd30b6f9951208542784c0ad592e2b2eb68b4a817bc41"} Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.039276 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cd5549f4d-zhk8w" event={"ID":"beab3e33-962c-46f9-ac60-a8a739d86cac","Type":"ContainerStarted","Data":"1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6"} Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.039325 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.039340 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.039349 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cd5549f4d-zhk8w" event={"ID":"beab3e33-962c-46f9-ac60-a8a739d86cac","Type":"ContainerStarted","Data":"842aa7633d9953929b0644cc56d041bb8f9b052cc337e98ad32444485220866d"} Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.039377 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-external-api-0" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.039613 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.079374 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-758cc66857-djf64"] Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.080825 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.083532 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.091041 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.106246 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-758cc66857-djf64"] Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.225469 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-combined-ca-bundle\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.226036 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-config\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.226088 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-httpd-config\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.226110 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6dqs\" (UniqueName: \"kubernetes.io/projected/f984d281-95b6-45be-abe8-d17370c22645-kube-api-access-d6dqs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.226206 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-ovndb-tls-certs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.226256 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-public-tls-certs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.226314 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-internal-tls-certs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.358157 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-httpd-config\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.358204 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6dqs\" (UniqueName: \"kubernetes.io/projected/f984d281-95b6-45be-abe8-d17370c22645-kube-api-access-d6dqs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.358254 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-ovndb-tls-certs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.358288 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-public-tls-certs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.358317 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-internal-tls-certs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.358426 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-combined-ca-bundle\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.358477 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-config\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.363832 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-httpd-config\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.367957 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-ovndb-tls-certs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.373032 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-config\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.377577 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-public-tls-certs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.378121 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-internal-tls-certs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.380724 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-combined-ca-bundle\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.396913 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6dqs\" (UniqueName: \"kubernetes.io/projected/f984d281-95b6-45be-abe8-d17370c22645-kube-api-access-d6dqs\") pod \"neutron-758cc66857-djf64\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.481597 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.630301 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xrxlt" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.769879 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-config-data\") pod \"0e456f3f-3163-48da-9e88-aaddade811b6\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.770106 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dmpm\" (UniqueName: \"kubernetes.io/projected/0e456f3f-3163-48da-9e88-aaddade811b6-kube-api-access-7dmpm\") pod \"0e456f3f-3163-48da-9e88-aaddade811b6\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.770146 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e456f3f-3163-48da-9e88-aaddade811b6-logs\") pod \"0e456f3f-3163-48da-9e88-aaddade811b6\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.770211 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-scripts\") pod \"0e456f3f-3163-48da-9e88-aaddade811b6\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.770304 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-combined-ca-bundle\") pod \"0e456f3f-3163-48da-9e88-aaddade811b6\" (UID: \"0e456f3f-3163-48da-9e88-aaddade811b6\") " Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.770819 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e456f3f-3163-48da-9e88-aaddade811b6-logs" (OuterVolumeSpecName: "logs") pod "0e456f3f-3163-48da-9e88-aaddade811b6" (UID: "0e456f3f-3163-48da-9e88-aaddade811b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.788039 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e456f3f-3163-48da-9e88-aaddade811b6-kube-api-access-7dmpm" (OuterVolumeSpecName: "kube-api-access-7dmpm") pod "0e456f3f-3163-48da-9e88-aaddade811b6" (UID: "0e456f3f-3163-48da-9e88-aaddade811b6"). InnerVolumeSpecName "kube-api-access-7dmpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.791657 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-scripts" (OuterVolumeSpecName: "scripts") pod "0e456f3f-3163-48da-9e88-aaddade811b6" (UID: "0e456f3f-3163-48da-9e88-aaddade811b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.799872 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e456f3f-3163-48da-9e88-aaddade811b6" (UID: "0e456f3f-3163-48da-9e88-aaddade811b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.801702 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-config-data" (OuterVolumeSpecName: "config-data") pod "0e456f3f-3163-48da-9e88-aaddade811b6" (UID: "0e456f3f-3163-48da-9e88-aaddade811b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.872554 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dmpm\" (UniqueName: \"kubernetes.io/projected/0e456f3f-3163-48da-9e88-aaddade811b6-kube-api-access-7dmpm\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.872582 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e456f3f-3163-48da-9e88-aaddade811b6-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.872592 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.872602 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:31 crc kubenswrapper[4728]: I0204 11:46:31.872610 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e456f3f-3163-48da-9e88-aaddade811b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.048106 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-758cc66857-djf64"] Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.060970 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xrxlt" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.060997 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xrxlt" event={"ID":"0e456f3f-3163-48da-9e88-aaddade811b6","Type":"ContainerDied","Data":"2ae858dce8f1056379ec0fa5a13dd860dfb001bcf49a88bebdd97e2b8bf745c5"} Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.061042 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae858dce8f1056379ec0fa5a13dd860dfb001bcf49a88bebdd97e2b8bf745c5" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.063856 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" event={"ID":"248d137a-3361-4df7-b7b6-c26cfb6c5a6d","Type":"ContainerStarted","Data":"75d1ad79fa9c745f131875b2de1237dc8b2878039885eb4ca4f721a5b0a6ed03"} Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.063999 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.070161 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cd5549f4d-zhk8w" event={"ID":"beab3e33-962c-46f9-ac60-a8a739d86cac","Type":"ContainerStarted","Data":"44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256"} Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.106838 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" podStartSLOduration=4.106818421 podStartE2EDuration="4.106818421s" podCreationTimestamp="2026-02-04 11:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:32.094684011 +0000 UTC m=+1141.237388426" watchObservedRunningTime="2026-02-04 11:46:32.106818421 +0000 UTC m=+1141.249522806" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.134420 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7cd5549f4d-zhk8w" podStartSLOduration=4.134398922 podStartE2EDuration="4.134398922s" podCreationTimestamp="2026-02-04 11:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:32.129256918 +0000 UTC m=+1141.271961303" watchObservedRunningTime="2026-02-04 11:46:32.134398922 +0000 UTC m=+1141.277103307" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.208770 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5dc5978b96-n87bc"] Feb 04 11:46:32 crc kubenswrapper[4728]: E0204 11:46:32.211145 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e456f3f-3163-48da-9e88-aaddade811b6" containerName="placement-db-sync" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.211237 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e456f3f-3163-48da-9e88-aaddade811b6" containerName="placement-db-sync" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.211470 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e456f3f-3163-48da-9e88-aaddade811b6" containerName="placement-db-sync" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.212426 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.214828 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.216149 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.216321 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.216507 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5dc5978b96-n87bc"] Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.217240 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-g78z8" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.217566 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.281839 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-config-data\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.281940 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fq5r\" (UniqueName: \"kubernetes.io/projected/9983ef36-a557-4867-8d8f-a8f5d1b77eae-kube-api-access-7fq5r\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.281990 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-internal-tls-certs\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.282055 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-combined-ca-bundle\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.282144 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9983ef36-a557-4867-8d8f-a8f5d1b77eae-logs\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.282169 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-public-tls-certs\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 
11:46:32.282224 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-scripts\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.383905 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-scripts\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.384284 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-config-data\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.384338 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fq5r\" (UniqueName: \"kubernetes.io/projected/9983ef36-a557-4867-8d8f-a8f5d1b77eae-kube-api-access-7fq5r\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.384379 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-internal-tls-certs\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.384429 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-combined-ca-bundle\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.384492 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9983ef36-a557-4867-8d8f-a8f5d1b77eae-logs\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.384520 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-public-tls-certs\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.386615 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9983ef36-a557-4867-8d8f-a8f5d1b77eae-logs\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.389092 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-public-tls-certs\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.390907 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-config-data\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.393802 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-internal-tls-certs\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.393919 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-scripts\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.395378 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-combined-ca-bundle\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.411435 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fq5r\" (UniqueName: \"kubernetes.io/projected/9983ef36-a557-4867-8d8f-a8f5d1b77eae-kube-api-access-7fq5r\") pod \"placement-5dc5978b96-n87bc\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:32 crc kubenswrapper[4728]: I0204 11:46:32.531858 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:33 crc kubenswrapper[4728]: I0204 11:46:33.084533 4728 generic.go:334] "Generic (PLEG): container finished" podID="e8339046-9234-4489-b308-c592a7afa3b0" containerID="b6aed950b1defebc26df71094e7110da169177e75b9abd57da391ef1a323cfea" exitCode=0 Feb 04 11:46:33 crc kubenswrapper[4728]: I0204 11:46:33.084609 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z5s6t" event={"ID":"e8339046-9234-4489-b308-c592a7afa3b0","Type":"ContainerDied","Data":"b6aed950b1defebc26df71094e7110da169177e75b9abd57da391ef1a323cfea"} Feb 04 11:46:33 crc kubenswrapper[4728]: I0204 11:46:33.089876 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-758cc66857-djf64" event={"ID":"f984d281-95b6-45be-abe8-d17370c22645","Type":"ContainerStarted","Data":"d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a"} Feb 04 11:46:33 crc kubenswrapper[4728]: I0204 11:46:33.089917 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-758cc66857-djf64" Feb 04 11:46:33 crc kubenswrapper[4728]: I0204 11:46:33.089928 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-758cc66857-djf64" event={"ID":"f984d281-95b6-45be-abe8-d17370c22645","Type":"ContainerStarted","Data":"0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b"} Feb 04 11:46:33 crc kubenswrapper[4728]: I0204 11:46:33.089937 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-758cc66857-djf64" event={"ID":"f984d281-95b6-45be-abe8-d17370c22645","Type":"ContainerStarted","Data":"66682683a5ffc570e5872a5b2984ad8dbe582591f87dc9a5bb420487e07754fb"} Feb 04 11:46:33 crc kubenswrapper[4728]: I0204 11:46:33.090423 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:33 crc kubenswrapper[4728]: I0204 11:46:33.155050 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-758cc66857-djf64" podStartSLOduration=2.155034999 podStartE2EDuration="2.155034999s" podCreationTimestamp="2026-02-04 11:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:33.147807305 +0000 UTC m=+1142.290511690" watchObservedRunningTime="2026-02-04 11:46:33.155034999 +0000 UTC m=+1142.297739384" Feb 04 11:46:35 crc kubenswrapper[4728]: I0204 11:46:35.449136 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:46:35 crc kubenswrapper[4728]: I0204 11:46:35.449714 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 11:46:35 crc kubenswrapper[4728]: I0204 11:46:35.938920 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 04 11:46:35 crc kubenswrapper[4728]: I0204 11:46:35.990386 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.116650 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-z5s6t" event={"ID":"e8339046-9234-4489-b308-c592a7afa3b0","Type":"ContainerDied","Data":"5439e3f5eea8f6a1d7113c103df8b62a696a451b0585786fef502354c513c932"} Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.116740 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5439e3f5eea8f6a1d7113c103df8b62a696a451b0585786fef502354c513c932" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.147117 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.268502 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-config-data\") pod \"e8339046-9234-4489-b308-c592a7afa3b0\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.268552 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-fernet-keys\") pod \"e8339046-9234-4489-b308-c592a7afa3b0\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.268647 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-credential-keys\") pod \"e8339046-9234-4489-b308-c592a7afa3b0\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.268765 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l25pn\" (UniqueName: \"kubernetes.io/projected/e8339046-9234-4489-b308-c592a7afa3b0-kube-api-access-l25pn\") pod \"e8339046-9234-4489-b308-c592a7afa3b0\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.268832 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-combined-ca-bundle\") pod \"e8339046-9234-4489-b308-c592a7afa3b0\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.268868 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-scripts\") pod \"e8339046-9234-4489-b308-c592a7afa3b0\" (UID: \"e8339046-9234-4489-b308-c592a7afa3b0\") " Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.274009 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-scripts" (OuterVolumeSpecName: "scripts") pod "e8339046-9234-4489-b308-c592a7afa3b0" (UID: "e8339046-9234-4489-b308-c592a7afa3b0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.274390 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8339046-9234-4489-b308-c592a7afa3b0-kube-api-access-l25pn" (OuterVolumeSpecName: "kube-api-access-l25pn") pod "e8339046-9234-4489-b308-c592a7afa3b0" (UID: "e8339046-9234-4489-b308-c592a7afa3b0"). InnerVolumeSpecName "kube-api-access-l25pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.276551 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e8339046-9234-4489-b308-c592a7afa3b0" (UID: "e8339046-9234-4489-b308-c592a7afa3b0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.277358 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e8339046-9234-4489-b308-c592a7afa3b0" (UID: "e8339046-9234-4489-b308-c592a7afa3b0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.293457 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-config-data" (OuterVolumeSpecName: "config-data") pod "e8339046-9234-4489-b308-c592a7afa3b0" (UID: "e8339046-9234-4489-b308-c592a7afa3b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.304227 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8339046-9234-4489-b308-c592a7afa3b0" (UID: "e8339046-9234-4489-b308-c592a7afa3b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.371137 4728 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.371190 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l25pn\" (UniqueName: \"kubernetes.io/projected/e8339046-9234-4489-b308-c592a7afa3b0-kube-api-access-l25pn\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.371204 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.371217 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.371229 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.371242 4728 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8339046-9234-4489-b308-c592a7afa3b0-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:36 crc kubenswrapper[4728]: I0204 11:46:36.421445 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5dc5978b96-n87bc"] Feb 04 11:46:36 crc kubenswrapper[4728]: W0204 11:46:36.422204 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9983ef36_a557_4867_8d8f_a8f5d1b77eae.slice/crio-5df8f9e4da0082b9367870687116e830deda4b3aa19b3bdd91df16e6b4f9cbfd WatchSource:0}: Error finding container 5df8f9e4da0082b9367870687116e830deda4b3aa19b3bdd91df16e6b4f9cbfd: Status 404 returned error can't find the container with id 5df8f9e4da0082b9367870687116e830deda4b3aa19b3bdd91df16e6b4f9cbfd Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.128050 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20df29d2-926c-4921-a7f2-eac948556d19","Type":"ContainerStarted","Data":"c995ca026679f71df4605f78f2affc6d9d7c2b61493833bba6c82236d6a0c02f"} Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.131325 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dc5978b96-n87bc" event={"ID":"9983ef36-a557-4867-8d8f-a8f5d1b77eae","Type":"ContainerStarted","Data":"9db4a3eec0c8e4ebd6f9beb6bc3a0893a1f8cd3fc932cd01bd3ef7171e3027ab"} Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.131414 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dc5978b96-n87bc" event={"ID":"9983ef36-a557-4867-8d8f-a8f5d1b77eae","Type":"ContainerStarted","Data":"82d3e4c09c91ea5db6c7ee0188289ff2b3bca192627d1c1451b315d996b937f7"} Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.131359 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-z5s6t" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.131456 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.131892 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.131964 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dc5978b96-n87bc" event={"ID":"9983ef36-a557-4867-8d8f-a8f5d1b77eae","Type":"ContainerStarted","Data":"5df8f9e4da0082b9367870687116e830deda4b3aa19b3bdd91df16e6b4f9cbfd"} Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.159773 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5dc5978b96-n87bc" podStartSLOduration=5.159724367 podStartE2EDuration="5.159724367s" podCreationTimestamp="2026-02-04 11:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:37.153249521 +0000 UTC m=+1146.295953916" watchObservedRunningTime="2026-02-04 11:46:37.159724367 +0000 UTC m=+1146.302428742" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.290813 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74d87669cb-xsvws"] Feb 04 11:46:37 crc kubenswrapper[4728]: E0204 11:46:37.293468 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8339046-9234-4489-b308-c592a7afa3b0" containerName="keystone-bootstrap" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.293502 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8339046-9234-4489-b308-c592a7afa3b0" containerName="keystone-bootstrap" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.293689 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8339046-9234-4489-b308-c592a7afa3b0" containerName="keystone-bootstrap" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.294304 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.305216 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74d87669cb-xsvws"] Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.306355 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.306706 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.307952 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-prll6" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.309108 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.309371 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.315268 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.388431 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-internal-tls-certs\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.388491 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-credential-keys\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.388519 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-combined-ca-bundle\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.388543 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-scripts\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.388612 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-config-data\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.388639 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqdc2\" (UniqueName: \"kubernetes.io/projected/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-kube-api-access-qqdc2\") pod \"keystone-74d87669cb-xsvws\" (UID: 
\"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.388698 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-fernet-keys\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.388885 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-public-tls-certs\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.490279 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-internal-tls-certs\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.490350 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-credential-keys\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.490376 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-combined-ca-bundle\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.490400 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-scripts\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.490433 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-config-data\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.490462 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqdc2\" (UniqueName: \"kubernetes.io/projected/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-kube-api-access-qqdc2\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.490523 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-fernet-keys\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 
04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.490551 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-public-tls-certs\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.497176 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-internal-tls-certs\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.498179 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-combined-ca-bundle\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.499747 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-credential-keys\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.500744 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-config-data\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.509566 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-public-tls-certs\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.510559 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-fernet-keys\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.510607 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-scripts\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.513973 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqdc2\" (UniqueName: \"kubernetes.io/projected/74a69221-1b4f-4ac9-bec8-82fd3cb462c9-kube-api-access-qqdc2\") pod \"keystone-74d87669cb-xsvws\" (UID: \"74a69221-1b4f-4ac9-bec8-82fd3cb462c9\") " pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:37 crc kubenswrapper[4728]: I0204 11:46:37.612172 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:38 crc kubenswrapper[4728]: I0204 11:46:38.049890 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74d87669cb-xsvws"] Feb 04 11:46:38 crc kubenswrapper[4728]: I0204 11:46:38.142287 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74d87669cb-xsvws" event={"ID":"74a69221-1b4f-4ac9-bec8-82fd3cb462c9","Type":"ContainerStarted","Data":"0555a6263229fa9658b5d8844bd95107e308c8c7728b3667a2c018fcbc26f47b"} Feb 04 11:46:39 crc kubenswrapper[4728]: I0204 11:46:39.071497 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:46:39 crc kubenswrapper[4728]: I0204 11:46:39.132299 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z7mz4"] Feb 04 11:46:39 crc kubenswrapper[4728]: I0204 11:46:39.132791 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" podUID="6411322f-ced0-457e-9f0f-61f37755a0b5" containerName="dnsmasq-dns" containerID="cri-o://acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563" gracePeriod=10 Feb 04 11:46:39 crc kubenswrapper[4728]: I0204 11:46:39.154901 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74d87669cb-xsvws" event={"ID":"74a69221-1b4f-4ac9-bec8-82fd3cb462c9","Type":"ContainerStarted","Data":"de832e7e685a880a38ba542190a066140ff0a7765bf6853f5575f993a3490849"} Feb 04 11:46:39 crc kubenswrapper[4728]: I0204 11:46:39.156010 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:46:39 crc kubenswrapper[4728]: I0204 11:46:39.175825 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-74d87669cb-xsvws" podStartSLOduration=2.175804779 podStartE2EDuration="2.175804779s" podCreationTimestamp="2026-02-04 11:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:39.174218131 +0000 UTC m=+1148.316922516" watchObservedRunningTime="2026-02-04 11:46:39.175804779 +0000 UTC m=+1148.318509164" Feb 04 11:46:39 crc kubenswrapper[4728]: I0204 11:46:39.558003 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" podUID="6411322f-ced0-457e-9f0f-61f37755a0b5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: connect: connection refused" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.107851 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.149588 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-config\") pod \"6411322f-ced0-457e-9f0f-61f37755a0b5\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.149658 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-svc\") pod \"6411322f-ced0-457e-9f0f-61f37755a0b5\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.149676 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-swift-storage-0\") pod \"6411322f-ced0-457e-9f0f-61f37755a0b5\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.149712 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-sb\") pod \"6411322f-ced0-457e-9f0f-61f37755a0b5\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.149740 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-nb\") pod \"6411322f-ced0-457e-9f0f-61f37755a0b5\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.149779 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlvfn\" (UniqueName: \"kubernetes.io/projected/6411322f-ced0-457e-9f0f-61f37755a0b5-kube-api-access-vlvfn\") pod \"6411322f-ced0-457e-9f0f-61f37755a0b5\" (UID: \"6411322f-ced0-457e-9f0f-61f37755a0b5\") " Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.156716 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6411322f-ced0-457e-9f0f-61f37755a0b5-kube-api-access-vlvfn" (OuterVolumeSpecName: "kube-api-access-vlvfn") pod "6411322f-ced0-457e-9f0f-61f37755a0b5" (UID: "6411322f-ced0-457e-9f0f-61f37755a0b5"). InnerVolumeSpecName "kube-api-access-vlvfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.178930 4728 generic.go:334] "Generic (PLEG): container finished" podID="6411322f-ced0-457e-9f0f-61f37755a0b5" containerID="acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563" exitCode=0 Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.179716 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.180398 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" event={"ID":"6411322f-ced0-457e-9f0f-61f37755a0b5","Type":"ContainerDied","Data":"acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563"} Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.180443 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-z7mz4" event={"ID":"6411322f-ced0-457e-9f0f-61f37755a0b5","Type":"ContainerDied","Data":"54efd24bfe81b9751c4c3dc88c29f9e2d60bd6a08ea9311af09ef06b7d306486"} Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.180459 4728 scope.go:117] "RemoveContainer" containerID="acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.242431 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-config" (OuterVolumeSpecName: "config") pod "6411322f-ced0-457e-9f0f-61f37755a0b5" (UID: "6411322f-ced0-457e-9f0f-61f37755a0b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.245316 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6411322f-ced0-457e-9f0f-61f37755a0b5" (UID: "6411322f-ced0-457e-9f0f-61f37755a0b5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.246518 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6411322f-ced0-457e-9f0f-61f37755a0b5" (UID: "6411322f-ced0-457e-9f0f-61f37755a0b5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.252030 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlvfn\" (UniqueName: \"kubernetes.io/projected/6411322f-ced0-457e-9f0f-61f37755a0b5-kube-api-access-vlvfn\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.252057 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.252068 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.252076 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.253166 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6411322f-ced0-457e-9f0f-61f37755a0b5" (UID: "6411322f-ced0-457e-9f0f-61f37755a0b5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.267240 4728 scope.go:117] "RemoveContainer" containerID="e74586d56fe3fc4c449831ae1afd16fc145e8dedb33d99b738a7a3ae40b89d08" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.268371 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6411322f-ced0-457e-9f0f-61f37755a0b5" (UID: "6411322f-ced0-457e-9f0f-61f37755a0b5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.327794 4728 scope.go:117] "RemoveContainer" containerID="acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563" Feb 04 11:46:40 crc kubenswrapper[4728]: E0204 11:46:40.328298 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563\": container with ID starting with acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563 not found: ID does not exist" containerID="acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.328334 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563"} err="failed to get container status \"acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563\": rpc error: code = NotFound desc = could not find container \"acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563\": container with ID starting with acc5e972488b62b93d1e9aa7501c7aaaff4ad095ab2d03dac6264ef6a3b35563 not found: ID does not exist" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.328360 4728 scope.go:117] "RemoveContainer" containerID="e74586d56fe3fc4c449831ae1afd16fc145e8dedb33d99b738a7a3ae40b89d08" Feb 04 11:46:40 crc kubenswrapper[4728]: E0204 11:46:40.328652 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74586d56fe3fc4c449831ae1afd16fc145e8dedb33d99b738a7a3ae40b89d08\": container with ID starting with e74586d56fe3fc4c449831ae1afd16fc145e8dedb33d99b738a7a3ae40b89d08 not found: ID does not exist" containerID="e74586d56fe3fc4c449831ae1afd16fc145e8dedb33d99b738a7a3ae40b89d08" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.328677 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e74586d56fe3fc4c449831ae1afd16fc145e8dedb33d99b738a7a3ae40b89d08"} err="failed to get container status \"e74586d56fe3fc4c449831ae1afd16fc145e8dedb33d99b738a7a3ae40b89d08\": rpc error: code = NotFound desc = could not find container \"e74586d56fe3fc4c449831ae1afd16fc145e8dedb33d99b738a7a3ae40b89d08\": container with ID starting with e74586d56fe3fc4c449831ae1afd16fc145e8dedb33d99b738a7a3ae40b89d08 not found: ID does not exist" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.353829 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.353960 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6411322f-ced0-457e-9f0f-61f37755a0b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.521734 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z7mz4"] Feb 04 11:46:40 crc kubenswrapper[4728]: I0204 11:46:40.533309 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z7mz4"] Feb 04 11:46:41 crc kubenswrapper[4728]: I0204 11:46:41.191296 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-sync-nqv8q" event={"ID":"aad93666-c664-4ded-8970-993c847ac437","Type":"ContainerStarted","Data":"d9374a6fcb36da146d33383f340663934c808b4ad02b8eb86ccf4b5daef8e937"} Feb 04 11:46:41 crc kubenswrapper[4728]: I0204 11:46:41.217983 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-nqv8q" podStartSLOduration=3.18853449 podStartE2EDuration="42.217961825s" podCreationTimestamp="2026-02-04 11:45:59 +0000 UTC" firstStartedPulling="2026-02-04 11:46:00.946547557 +0000 UTC m=+1110.089251942" lastFinishedPulling="2026-02-04 11:46:39.975974892 +0000 UTC m=+1149.118679277" observedRunningTime="2026-02-04 11:46:41.217055354 +0000 UTC m=+1150.359759749" watchObservedRunningTime="2026-02-04 11:46:41.217961825 +0000 UTC m=+1150.360666210" Feb 04 11:46:41 crc kubenswrapper[4728]: I0204 11:46:41.564480 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6411322f-ced0-457e-9f0f-61f37755a0b5" path="/var/lib/kubelet/pods/6411322f-ced0-457e-9f0f-61f37755a0b5/volumes" Feb 04 11:46:42 crc kubenswrapper[4728]: I0204 11:46:42.655053 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 04 11:46:42 crc kubenswrapper[4728]: I0204 11:46:42.760094 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 04 11:46:44 crc kubenswrapper[4728]: I0204 11:46:44.218690 4728 generic.go:334] "Generic (PLEG): container finished" podID="aad93666-c664-4ded-8970-993c847ac437" containerID="d9374a6fcb36da146d33383f340663934c808b4ad02b8eb86ccf4b5daef8e937" exitCode=0 Feb 04 11:46:44 crc kubenswrapper[4728]: I0204 11:46:44.218780 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nqv8q" event={"ID":"aad93666-c664-4ded-8970-993c847ac437","Type":"ContainerDied","Data":"d9374a6fcb36da146d33383f340663934c808b4ad02b8eb86ccf4b5daef8e937"} Feb 04 11:46:45 crc kubenswrapper[4728]: I0204 11:46:45.959774 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:46:45 crc kubenswrapper[4728]: I0204 11:46:45.963231 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-combined-ca-bundle\") pod \"aad93666-c664-4ded-8970-993c847ac437\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " Feb 04 11:46:45 crc kubenswrapper[4728]: I0204 11:46:45.963301 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-db-sync-config-data\") pod \"aad93666-c664-4ded-8970-993c847ac437\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " Feb 04 11:46:45 crc kubenswrapper[4728]: I0204 11:46:45.963356 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j62df\" (UniqueName: \"kubernetes.io/projected/aad93666-c664-4ded-8970-993c847ac437-kube-api-access-j62df\") pod \"aad93666-c664-4ded-8970-993c847ac437\" (UID: \"aad93666-c664-4ded-8970-993c847ac437\") " Feb 04 11:46:45 crc kubenswrapper[4728]: I0204 11:46:45.970240 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "aad93666-c664-4ded-8970-993c847ac437" (UID: "aad93666-c664-4ded-8970-993c847ac437"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:45 crc kubenswrapper[4728]: I0204 11:46:45.970589 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad93666-c664-4ded-8970-993c847ac437-kube-api-access-j62df" (OuterVolumeSpecName: "kube-api-access-j62df") pod "aad93666-c664-4ded-8970-993c847ac437" (UID: "aad93666-c664-4ded-8970-993c847ac437"). InnerVolumeSpecName "kube-api-access-j62df". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:45 crc kubenswrapper[4728]: I0204 11:46:45.994860 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aad93666-c664-4ded-8970-993c847ac437" (UID: "aad93666-c664-4ded-8970-993c847ac437"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.064842 4728 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.064902 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j62df\" (UniqueName: \"kubernetes.io/projected/aad93666-c664-4ded-8970-993c847ac437-kube-api-access-j62df\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.064917 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad93666-c664-4ded-8970-993c847ac437-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.233264 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nqv8q" event={"ID":"aad93666-c664-4ded-8970-993c847ac437","Type":"ContainerDied","Data":"e0f597f4edf900812cd38c682abea6f28b79238a417dcd67dde7b06129d9662c"} Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.233583 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0f597f4edf900812cd38c682abea6f28b79238a417dcd67dde7b06129d9662c" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.233336 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nqv8q" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.669415 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-55445887dc-67kkb"] Feb 04 11:46:46 crc kubenswrapper[4728]: E0204 11:46:46.669933 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6411322f-ced0-457e-9f0f-61f37755a0b5" containerName="dnsmasq-dns" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.669953 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6411322f-ced0-457e-9f0f-61f37755a0b5" containerName="dnsmasq-dns" Feb 04 11:46:46 crc kubenswrapper[4728]: E0204 11:46:46.669971 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad93666-c664-4ded-8970-993c847ac437" containerName="barbican-db-sync" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.669979 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad93666-c664-4ded-8970-993c847ac437" containerName="barbican-db-sync" Feb 04 11:46:46 crc kubenswrapper[4728]: E0204 11:46:46.669992 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6411322f-ced0-457e-9f0f-61f37755a0b5" containerName="init" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.669999 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6411322f-ced0-457e-9f0f-61f37755a0b5" containerName="init" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.670253 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6411322f-ced0-457e-9f0f-61f37755a0b5" containerName="dnsmasq-dns" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.670273 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad93666-c664-4ded-8970-993c847ac437" containerName="barbican-db-sync" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.674552 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.680578 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.680855 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.681147 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xp6sr" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.696609 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55445887dc-67kkb"] Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.783805 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-597658645d-gglvr"] Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.792316 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a809eec-ba73-4746-a976-e43e762c78c0-logs\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.792364 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a809eec-ba73-4746-a976-e43e762c78c0-config-data-custom\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.792414 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a809eec-ba73-4746-a976-e43e762c78c0-combined-ca-bundle\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.792465 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rts9z\" (UniqueName: \"kubernetes.io/projected/8a809eec-ba73-4746-a976-e43e762c78c0-kube-api-access-rts9z\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.792511 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a809eec-ba73-4746-a976-e43e762c78c0-config-data\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.842435 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.847149 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.858782 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-597658645d-gglvr"] Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.938237 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a809eec-ba73-4746-a976-e43e762c78c0-combined-ca-bundle\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.938586 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rts9z\" (UniqueName: \"kubernetes.io/projected/8a809eec-ba73-4746-a976-e43e762c78c0-kube-api-access-rts9z\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.938644 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a809eec-ba73-4746-a976-e43e762c78c0-config-data\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.938712 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a809eec-ba73-4746-a976-e43e762c78c0-logs\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.938734 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a809eec-ba73-4746-a976-e43e762c78c0-config-data-custom\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.948605 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6z8lr"] Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.950032 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.952155 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a809eec-ba73-4746-a976-e43e762c78c0-logs\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.962429 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a809eec-ba73-4746-a976-e43e762c78c0-config-data-custom\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.962511 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6z8lr"] Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.965559 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a809eec-ba73-4746-a976-e43e762c78c0-config-data\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.969953 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a809eec-ba73-4746-a976-e43e762c78c0-combined-ca-bundle\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.985094 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6db974ddcd-24hrm"] Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.991093 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.994281 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.997804 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rts9z\" (UniqueName: \"kubernetes.io/projected/8a809eec-ba73-4746-a976-e43e762c78c0-kube-api-access-rts9z\") pod \"barbican-worker-55445887dc-67kkb\" (UID: \"8a809eec-ba73-4746-a976-e43e762c78c0\") " pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:46 crc kubenswrapper[4728]: I0204 11:46:46.999366 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-55445887dc-67kkb" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.020998 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6db974ddcd-24hrm"] Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.040566 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.040634 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.040775 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxprq\" (UniqueName: \"kubernetes.io/projected/a9a14765-4f8e-445d-a260-28ce443609b8-kube-api-access-cxprq\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.040847 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ccfq\" (UniqueName: \"kubernetes.io/projected/7d9f2a7f-3218-4705-862e-9ab115f73023-kube-api-access-5ccfq\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.041099 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.041217 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9a14765-4f8e-445d-a260-28ce443609b8-combined-ca-bundle\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.041295 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9a14765-4f8e-445d-a260-28ce443609b8-logs\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.041343 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9a14765-4f8e-445d-a260-28ce443609b8-config-data\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: 
\"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.041374 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9a14765-4f8e-445d-a260-28ce443609b8-config-data-custom\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.041431 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-config\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.041454 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-svc\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: E0204 11:46:47.117273 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="20df29d2-926c-4921-a7f2-eac948556d19" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.144800 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9a14765-4f8e-445d-a260-28ce443609b8-combined-ca-bundle\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.144878 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9a14765-4f8e-445d-a260-28ce443609b8-logs\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.144901 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9a14765-4f8e-445d-a260-28ce443609b8-config-data\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.144950 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9a14765-4f8e-445d-a260-28ce443609b8-config-data-custom\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.144978 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-config\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.145014 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-svc\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.145039 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-combined-ca-bundle\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.145057 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.145094 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.145123 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.145150 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxprq\" (UniqueName: \"kubernetes.io/projected/a9a14765-4f8e-445d-a260-28ce443609b8-kube-api-access-cxprq\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.145191 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ccfq\" (UniqueName: \"kubernetes.io/projected/7d9f2a7f-3218-4705-862e-9ab115f73023-kube-api-access-5ccfq\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.145276 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data-custom\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.145308 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bffxh\" (UniqueName: \"kubernetes.io/projected/7b4c4df9-ba39-445b-8a27-e5bb17be0079-kube-api-access-bffxh\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.145349 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.145384 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b4c4df9-ba39-445b-8a27-e5bb17be0079-logs\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.146076 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9a14765-4f8e-445d-a260-28ce443609b8-logs\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.147683 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.148987 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-config\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.149126 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-svc\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.150210 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.150712 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9a14765-4f8e-445d-a260-28ce443609b8-config-data\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.151652 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9a14765-4f8e-445d-a260-28ce443609b8-combined-ca-bundle\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.152008 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.152820 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9a14765-4f8e-445d-a260-28ce443609b8-config-data-custom\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.166907 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ccfq\" (UniqueName: \"kubernetes.io/projected/7d9f2a7f-3218-4705-862e-9ab115f73023-kube-api-access-5ccfq\") pod \"dnsmasq-dns-85ff748b95-6z8lr\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.167323 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxprq\" (UniqueName: \"kubernetes.io/projected/a9a14765-4f8e-445d-a260-28ce443609b8-kube-api-access-cxprq\") pod \"barbican-keystone-listener-597658645d-gglvr\" (UID: \"a9a14765-4f8e-445d-a260-28ce443609b8\") " pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.180482 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.247379 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.247513 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data-custom\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.247553 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bffxh\" (UniqueName: \"kubernetes.io/projected/7b4c4df9-ba39-445b-8a27-e5bb17be0079-kube-api-access-bffxh\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.247600 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b4c4df9-ba39-445b-8a27-e5bb17be0079-logs\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.247677 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-combined-ca-bundle\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.249874 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b4c4df9-ba39-445b-8a27-e5bb17be0079-logs\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.250118 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-597658645d-gglvr" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.252541 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-combined-ca-bundle\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.256342 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data-custom\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.263268 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.275670 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20df29d2-926c-4921-a7f2-eac948556d19","Type":"ContainerStarted","Data":"f6b2a561332c0e0dc85d4a235e6a79cf689fd8f73820e578d9b4086a5a297758"} Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.276031 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="ceilometer-notification-agent" containerID="cri-o://43a4bb62c8185cd8f14c958b8c82a3c1c6d6b5e4155022dae5323ea4c9d7ae26" gracePeriod=30 Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.276174 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.276615 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="proxy-httpd" containerID="cri-o://f6b2a561332c0e0dc85d4a235e6a79cf689fd8f73820e578d9b4086a5a297758" gracePeriod=30 Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.276732 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="sg-core" containerID="cri-o://c995ca026679f71df4605f78f2affc6d9d7c2b61493833bba6c82236d6a0c02f" gracePeriod=30 Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.298322 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bffxh\" (UniqueName: \"kubernetes.io/projected/7b4c4df9-ba39-445b-8a27-e5bb17be0079-kube-api-access-bffxh\") pod \"barbican-api-6db974ddcd-24hrm\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.314379 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jx9lp" event={"ID":"2186aabd-28ff-488a-a224-01c14710adac","Type":"ContainerStarted","Data":"7106a66ab3aad23ff83211ffa2347875488e4e58c79e3fe0cb3d4223bea9d26e"} Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.336419 4728 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/heat-db-sync-jx9lp" podStartSLOduration=2.686578557 podStartE2EDuration="49.336394992s" podCreationTimestamp="2026-02-04 11:45:58 +0000 UTC" firstStartedPulling="2026-02-04 11:45:59.77336271 +0000 UTC m=+1108.916067095" lastFinishedPulling="2026-02-04 11:46:46.423179145 +0000 UTC m=+1155.565883530" observedRunningTime="2026-02-04 11:46:47.327192191 +0000 UTC m=+1156.469896606" watchObservedRunningTime="2026-02-04 11:46:47.336394992 +0000 UTC m=+1156.479099377" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.513319 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.521279 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55445887dc-67kkb"] Feb 04 11:46:47 crc kubenswrapper[4728]: I0204 11:46:47.723107 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6z8lr"] Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.050534 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-597658645d-gglvr"] Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.148794 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6db974ddcd-24hrm"] Feb 04 11:46:48 crc kubenswrapper[4728]: W0204 11:46:48.180835 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b4c4df9_ba39_445b_8a27_e5bb17be0079.slice/crio-67b304a861df0a4940c6ec0b92810969466755e7a74840799c8627a0ffb32d6a WatchSource:0}: Error finding container 67b304a861df0a4940c6ec0b92810969466755e7a74840799c8627a0ffb32d6a: Status 404 returned error can't find the container with id 67b304a861df0a4940c6ec0b92810969466755e7a74840799c8627a0ffb32d6a Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.326085 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6db974ddcd-24hrm" event={"ID":"7b4c4df9-ba39-445b-8a27-e5bb17be0079","Type":"ContainerStarted","Data":"67b304a861df0a4940c6ec0b92810969466755e7a74840799c8627a0ffb32d6a"} Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.327838 4728 generic.go:334] "Generic (PLEG): container finished" podID="7d9f2a7f-3218-4705-862e-9ab115f73023" containerID="2c23284725bf6d2a084ca5584455ae82bec838ec39a993c9e153626379034265" exitCode=0 Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.327904 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" event={"ID":"7d9f2a7f-3218-4705-862e-9ab115f73023","Type":"ContainerDied","Data":"2c23284725bf6d2a084ca5584455ae82bec838ec39a993c9e153626379034265"} Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.327928 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" event={"ID":"7d9f2a7f-3218-4705-862e-9ab115f73023","Type":"ContainerStarted","Data":"9b10c31d1573e55ce8a0ba7fc30bf7af9b7bf76f7b2021cebfa0f2752f9bd8a2"} Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.341345 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55445887dc-67kkb" event={"ID":"8a809eec-ba73-4746-a976-e43e762c78c0","Type":"ContainerStarted","Data":"5f681399c79d8d13146674e4297a3576f566725463c2a9e5ac0d67981f2db7ef"} Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.343857 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-kgzbm" event={"ID":"d949b343-bfde-4d50-81b1-a7c66765c076","Type":"ContainerStarted","Data":"974db8f25fb6bef41dc01e78218ecc1a75f18ef547a5ac411dea800b4a63e201"} Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.345781 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-597658645d-gglvr" event={"ID":"a9a14765-4f8e-445d-a260-28ce443609b8","Type":"ContainerStarted","Data":"9fcf89a3326f7b967f9cb14d9160c6496689c6cc060fe08ddcb79b41b8208f2e"} Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.374964 4728 generic.go:334] "Generic (PLEG): container finished" podID="20df29d2-926c-4921-a7f2-eac948556d19" containerID="f6b2a561332c0e0dc85d4a235e6a79cf689fd8f73820e578d9b4086a5a297758" exitCode=0 Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.375015 4728 generic.go:334] "Generic (PLEG): container finished" podID="20df29d2-926c-4921-a7f2-eac948556d19" containerID="c995ca026679f71df4605f78f2affc6d9d7c2b61493833bba6c82236d6a0c02f" exitCode=2 Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.375114 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20df29d2-926c-4921-a7f2-eac948556d19","Type":"ContainerDied","Data":"f6b2a561332c0e0dc85d4a235e6a79cf689fd8f73820e578d9b4086a5a297758"} Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.375144 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20df29d2-926c-4921-a7f2-eac948556d19","Type":"ContainerDied","Data":"c995ca026679f71df4605f78f2affc6d9d7c2b61493833bba6c82236d6a0c02f"} Feb 04 11:46:48 crc kubenswrapper[4728]: I0204 11:46:48.407695 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kgzbm" podStartSLOduration=4.15762789 podStartE2EDuration="50.407671111s" podCreationTimestamp="2026-02-04 11:45:58 +0000 UTC" firstStartedPulling="2026-02-04 11:46:00.173097793 +0000 UTC m=+1109.315802178" lastFinishedPulling="2026-02-04 11:46:46.423141014 +0000 UTC m=+1155.565845399" observedRunningTime="2026-02-04 11:46:48.397420096 +0000 UTC m=+1157.540124481" watchObservedRunningTime="2026-02-04 11:46:48.407671111 +0000 UTC m=+1157.550375496" Feb 04 11:46:49 crc kubenswrapper[4728]: I0204 11:46:49.422309 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55445887dc-67kkb" event={"ID":"8a809eec-ba73-4746-a976-e43e762c78c0","Type":"ContainerStarted","Data":"4674d11213a6dd767c3d716ec6e20f02b65ef707ed6167e20df2f5635671554d"} Feb 04 11:46:49 crc kubenswrapper[4728]: I0204 11:46:49.425590 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6db974ddcd-24hrm" event={"ID":"7b4c4df9-ba39-445b-8a27-e5bb17be0079","Type":"ContainerStarted","Data":"e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826"} Feb 04 11:46:49 crc kubenswrapper[4728]: I0204 11:46:49.425643 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6db974ddcd-24hrm" event={"ID":"7b4c4df9-ba39-445b-8a27-e5bb17be0079","Type":"ContainerStarted","Data":"a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4"} Feb 04 11:46:49 crc kubenswrapper[4728]: I0204 11:46:49.426854 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:46:49 crc kubenswrapper[4728]: I0204 11:46:49.426881 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 
11:46:49 crc kubenswrapper[4728]: I0204 11:46:49.430060 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" event={"ID":"7d9f2a7f-3218-4705-862e-9ab115f73023","Type":"ContainerStarted","Data":"27682c34e67c35ffee76a17d5bfe74d34b8600ee6b5a9fdddb9f06001a5955da"} Feb 04 11:46:49 crc kubenswrapper[4728]: I0204 11:46:49.430842 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:49 crc kubenswrapper[4728]: I0204 11:46:49.456988 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6db974ddcd-24hrm" podStartSLOduration=3.456970915 podStartE2EDuration="3.456970915s" podCreationTimestamp="2026-02-04 11:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:49.446156786 +0000 UTC m=+1158.588861171" watchObservedRunningTime="2026-02-04 11:46:49.456970915 +0000 UTC m=+1158.599675300" Feb 04 11:46:49 crc kubenswrapper[4728]: I0204 11:46:49.474009 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" podStartSLOduration=3.4739925720000002 podStartE2EDuration="3.473992572s" podCreationTimestamp="2026-02-04 11:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:49.465433827 +0000 UTC m=+1158.608138232" watchObservedRunningTime="2026-02-04 11:46:49.473992572 +0000 UTC m=+1158.616696957" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.111901 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56d59c75d6-qgl25"] Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.113958 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.116228 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.116490 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.125826 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56d59c75d6-qgl25"] Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.217382 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nf4b\" (UniqueName: \"kubernetes.io/projected/ceeca3bd-824e-4b51-a536-6ae20911faa9-kube-api-access-2nf4b\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.217459 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-config-data-custom\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.217674 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-config-data\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.217716 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-combined-ca-bundle\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.217842 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-internal-tls-certs\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.217878 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-public-tls-certs\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.217950 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceeca3bd-824e-4b51-a536-6ae20911faa9-logs\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.319233 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceeca3bd-824e-4b51-a536-6ae20911faa9-logs\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.319460 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nf4b\" (UniqueName: \"kubernetes.io/projected/ceeca3bd-824e-4b51-a536-6ae20911faa9-kube-api-access-2nf4b\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.319490 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-config-data-custom\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.319589 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-config-data\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.319623 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-combined-ca-bundle\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.319681 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-internal-tls-certs\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.319696 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-public-tls-certs\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.319983 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceeca3bd-824e-4b51-a536-6ae20911faa9-logs\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.323478 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-public-tls-certs\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.324589 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-config-data-custom\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.325827 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-internal-tls-certs\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.326093 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-config-data\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.326175 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceeca3bd-824e-4b51-a536-6ae20911faa9-combined-ca-bundle\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.337177 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nf4b\" (UniqueName: \"kubernetes.io/projected/ceeca3bd-824e-4b51-a536-6ae20911faa9-kube-api-access-2nf4b\") pod \"barbican-api-56d59c75d6-qgl25\" (UID: \"ceeca3bd-824e-4b51-a536-6ae20911faa9\") " pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.432316 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.438601 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-597658645d-gglvr" event={"ID":"a9a14765-4f8e-445d-a260-28ce443609b8","Type":"ContainerStarted","Data":"5f8794c4da71451ad479612a585a9ca073406b54c3095ddd32946e061f90d6a0"} Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.444345 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55445887dc-67kkb" event={"ID":"8a809eec-ba73-4746-a976-e43e762c78c0","Type":"ContainerStarted","Data":"7162938e7d395fbe9054d488a494525f6b69a46062548f8abc4ab0b65b068dff"} Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.465014 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-55445887dc-67kkb" podStartSLOduration=3.153521286 podStartE2EDuration="4.464998511s" podCreationTimestamp="2026-02-04 11:46:46 +0000 UTC" firstStartedPulling="2026-02-04 11:46:47.580875221 +0000 UTC m=+1156.723579606" lastFinishedPulling="2026-02-04 11:46:48.892352436 +0000 UTC m=+1158.035056831" observedRunningTime="2026-02-04 11:46:50.460448331 +0000 UTC m=+1159.603152736" watchObservedRunningTime="2026-02-04 11:46:50.464998511 +0000 UTC m=+1159.607702896" Feb 04 11:46:50 crc kubenswrapper[4728]: I0204 11:46:50.911847 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56d59c75d6-qgl25"] Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.455324 4728 generic.go:334] "Generic (PLEG): container finished" podID="20df29d2-926c-4921-a7f2-eac948556d19" containerID="43a4bb62c8185cd8f14c958b8c82a3c1c6d6b5e4155022dae5323ea4c9d7ae26" exitCode=0 Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.455514 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20df29d2-926c-4921-a7f2-eac948556d19","Type":"ContainerDied","Data":"43a4bb62c8185cd8f14c958b8c82a3c1c6d6b5e4155022dae5323ea4c9d7ae26"} Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.457810 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56d59c75d6-qgl25" event={"ID":"ceeca3bd-824e-4b51-a536-6ae20911faa9","Type":"ContainerStarted","Data":"22a7f9e0f4c763d46a41cee6485c9447d36fcc0f8ec308c1f1ee2d33b27c8ffd"} Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.457845 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56d59c75d6-qgl25" event={"ID":"ceeca3bd-824e-4b51-a536-6ae20911faa9","Type":"ContainerStarted","Data":"be3edfb1044709aa545127105d9cce063719fe4693f7fbdf0783ba3ce9fbeb84"} Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.461332 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-597658645d-gglvr" event={"ID":"a9a14765-4f8e-445d-a260-28ce443609b8","Type":"ContainerStarted","Data":"8b8a794c12fc385176f46642f9a91e86bc2502dbe48143f7880a02590ec319a3"} Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.512006 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-597658645d-gglvr" podStartSLOduration=3.65908292 podStartE2EDuration="5.511979828s" podCreationTimestamp="2026-02-04 11:46:46 +0000 UTC" firstStartedPulling="2026-02-04 11:46:48.061666013 +0000 UTC m=+1157.204370398" lastFinishedPulling="2026-02-04 11:46:49.914562921 +0000 UTC m=+1159.057267306" observedRunningTime="2026-02-04 11:46:51.493350752 
+0000 UTC m=+1160.636055137" watchObservedRunningTime="2026-02-04 11:46:51.511979828 +0000 UTC m=+1160.654684213" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.722770 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.846365 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-scripts\") pod \"20df29d2-926c-4921-a7f2-eac948556d19\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.846486 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqbk4\" (UniqueName: \"kubernetes.io/projected/20df29d2-926c-4921-a7f2-eac948556d19-kube-api-access-zqbk4\") pod \"20df29d2-926c-4921-a7f2-eac948556d19\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.846507 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-sg-core-conf-yaml\") pod \"20df29d2-926c-4921-a7f2-eac948556d19\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.846581 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-combined-ca-bundle\") pod \"20df29d2-926c-4921-a7f2-eac948556d19\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.846610 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-config-data\") pod \"20df29d2-926c-4921-a7f2-eac948556d19\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.846644 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-log-httpd\") pod \"20df29d2-926c-4921-a7f2-eac948556d19\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.846676 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-run-httpd\") pod \"20df29d2-926c-4921-a7f2-eac948556d19\" (UID: \"20df29d2-926c-4921-a7f2-eac948556d19\") " Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.847096 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20df29d2-926c-4921-a7f2-eac948556d19" (UID: "20df29d2-926c-4921-a7f2-eac948556d19"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.847111 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20df29d2-926c-4921-a7f2-eac948556d19" (UID: "20df29d2-926c-4921-a7f2-eac948556d19"). InnerVolumeSpecName "log-httpd". 
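
The pod_startup_latency_tracker.go:104 entries above carry enough detail to recompute both reported durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), most robustly taken from the monotonic m=+ offsets. A minimal sketch, with the constants copied from the barbican-keystone-listener-597658645d-gglvr entry (the helper name is ours, not kubelet's):

    package main

    import "fmt"

    // sloDuration recomputes kubelet's podStartSLOduration from the values
    // in the log entry: end-to-end startup time minus time spent pulling images.
    func sloDuration(e2e, firstStartedPulling, lastFinishedPulling float64) float64 {
        return e2e - (lastFinishedPulling - firstStartedPulling)
    }

    func main() {
        // Values from the barbican-keystone-listener entry above.
        e2e := 5.511979828          // podStartE2EDuration, seconds
        firstPull := 1157.204370398 // firstStartedPulling, monotonic m=+ offset
        lastPull := 1159.057267306  // lastFinishedPulling, monotonic m=+ offset
        fmt.Printf("podStartSLOduration=%.9f\n", sloDuration(e2e, firstPull, lastPull))
        // Prints 3.659082920, matching the logged podStartSLOduration=3.65908292.
    }

The later barbican-api entry shows the degenerate case: both pull timestamps are the zero sentinel 0001-01-01, so no pull window is subtracted and SLO duration equals E2E duration (2.526470219s).
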
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.851461 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20df29d2-926c-4921-a7f2-eac948556d19-kube-api-access-zqbk4" (OuterVolumeSpecName: "kube-api-access-zqbk4") pod "20df29d2-926c-4921-a7f2-eac948556d19" (UID: "20df29d2-926c-4921-a7f2-eac948556d19"). InnerVolumeSpecName "kube-api-access-zqbk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.855924 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-scripts" (OuterVolumeSpecName: "scripts") pod "20df29d2-926c-4921-a7f2-eac948556d19" (UID: "20df29d2-926c-4921-a7f2-eac948556d19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.875967 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20df29d2-926c-4921-a7f2-eac948556d19" (UID: "20df29d2-926c-4921-a7f2-eac948556d19"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.913156 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20df29d2-926c-4921-a7f2-eac948556d19" (UID: "20df29d2-926c-4921-a7f2-eac948556d19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.925520 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-config-data" (OuterVolumeSpecName: "config-data") pod "20df29d2-926c-4921-a7f2-eac948556d19" (UID: "20df29d2-926c-4921-a7f2-eac948556d19"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.949585 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.949622 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.949636 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqbk4\" (UniqueName: \"kubernetes.io/projected/20df29d2-926c-4921-a7f2-eac948556d19-kube-api-access-zqbk4\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.949646 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.949654 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.949662 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df29d2-926c-4921-a7f2-eac948556d19-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:51 crc kubenswrapper[4728]: I0204 11:46:51.949672 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20df29d2-926c-4921-a7f2-eac948556d19-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.485372 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20df29d2-926c-4921-a7f2-eac948556d19","Type":"ContainerDied","Data":"9ba66c242b3ce64cfe2b488f2958742e515eece89b81db989e5045e66450729a"} Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.485683 4728 scope.go:117] "RemoveContainer" containerID="f6b2a561332c0e0dc85d4a235e6a79cf689fd8f73820e578d9b4086a5a297758" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.485600 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.492124 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56d59c75d6-qgl25" event={"ID":"ceeca3bd-824e-4b51-a536-6ae20911faa9","Type":"ContainerStarted","Data":"15c7933ce043d0034716d2fec96cba61339cf674def38089c35464fbfb1c5b19"} Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.513198 4728 scope.go:117] "RemoveContainer" containerID="c995ca026679f71df4605f78f2affc6d9d7c2b61493833bba6c82236d6a0c02f" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.526486 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56d59c75d6-qgl25" podStartSLOduration=2.526470219 podStartE2EDuration="2.526470219s" podCreationTimestamp="2026-02-04 11:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:46:52.525181428 +0000 UTC m=+1161.667885823" watchObservedRunningTime="2026-02-04 11:46:52.526470219 +0000 UTC m=+1161.669174604" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.545948 4728 scope.go:117] "RemoveContainer" containerID="43a4bb62c8185cd8f14c958b8c82a3c1c6d6b5e4155022dae5323ea4c9d7ae26" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.598377 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.635324 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.669522 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:46:52 crc kubenswrapper[4728]: E0204 11:46:52.669936 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="proxy-httpd" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.669948 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="proxy-httpd" Feb 04 11:46:52 crc kubenswrapper[4728]: E0204 11:46:52.669972 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="ceilometer-notification-agent" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.669979 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="ceilometer-notification-agent" Feb 04 11:46:52 crc kubenswrapper[4728]: E0204 11:46:52.669988 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="sg-core" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.669995 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="sg-core" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.670162 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="sg-core" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.670177 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="proxy-httpd" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.670194 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="20df29d2-926c-4921-a7f2-eac948556d19" containerName="ceilometer-notification-agent" Feb 04 
11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.671686 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.674674 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.675272 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.702570 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.774765 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-config-data\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.774812 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-scripts\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.774830 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.774876 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbx8p\" (UniqueName: \"kubernetes.io/projected/33fb50c4-1e12-45bc-ab0d-efd598f73530-kube-api-access-zbx8p\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.774917 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-log-httpd\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.774941 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-run-httpd\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.774966 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.876616 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-config-data\") pod \"ceilometer-0\" (UID: 
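
The reflector.go:368 "Caches populated for *v1.Secret" lines above show the kubelet's watch-based secret manager starting a dedicated, name-scoped reflector per secret the new pod references, rather than listing every secret in the namespace. Something equivalent can be built with client-go; a sketch assuming in-cluster credentials and the usual client-go packages on the module path:

    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Watch exactly one secret by name, the way the kubelet scopes a
        // reflector per referenced object.
        lw := cache.NewListWatchFromClient(
            client.CoreV1().RESTClient(), "secrets", "openstack",
            fields.OneTermEqualSelector("metadata.name", "ceilometer-config-data"))

        _, controller := cache.NewInformer(lw, &v1.Secret{}, 0, cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                fmt.Println("cache populated for secret:", obj.(*v1.Secret).Name)
            },
        })
        go controller.Run(make(chan struct{}))
        time.Sleep(5 * time.Second)
    }
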
\"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.876664 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-scripts\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.876684 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.876728 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbx8p\" (UniqueName: \"kubernetes.io/projected/33fb50c4-1e12-45bc-ab0d-efd598f73530-kube-api-access-zbx8p\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.876801 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-log-httpd\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.876833 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-run-httpd\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.876867 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.877304 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-run-httpd\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.877501 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-log-httpd\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.881856 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.881933 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-scripts\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " 
pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.884974 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.891002 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-config-data\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:52 crc kubenswrapper[4728]: I0204 11:46:52.893840 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbx8p\" (UniqueName: \"kubernetes.io/projected/33fb50c4-1e12-45bc-ab0d-efd598f73530-kube-api-access-zbx8p\") pod \"ceilometer-0\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " pod="openstack/ceilometer-0" Feb 04 11:46:53 crc kubenswrapper[4728]: I0204 11:46:53.016915 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:46:53 crc kubenswrapper[4728]: I0204 11:46:53.511352 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:46:53 crc kubenswrapper[4728]: W0204 11:46:53.513650 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33fb50c4_1e12_45bc_ab0d_efd598f73530.slice/crio-478a2d4269fbf1bde566a160d4a65fc15e6be3997065f2052bc4899fa02903c2 WatchSource:0}: Error finding container 478a2d4269fbf1bde566a160d4a65fc15e6be3997065f2052bc4899fa02903c2: Status 404 returned error can't find the container with id 478a2d4269fbf1bde566a160d4a65fc15e6be3997065f2052bc4899fa02903c2 Feb 04 11:46:53 crc kubenswrapper[4728]: I0204 11:46:53.516998 4728 generic.go:334] "Generic (PLEG): container finished" podID="2186aabd-28ff-488a-a224-01c14710adac" containerID="7106a66ab3aad23ff83211ffa2347875488e4e58c79e3fe0cb3d4223bea9d26e" exitCode=0 Feb 04 11:46:53 crc kubenswrapper[4728]: I0204 11:46:53.517999 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jx9lp" event={"ID":"2186aabd-28ff-488a-a224-01c14710adac","Type":"ContainerDied","Data":"7106a66ab3aad23ff83211ffa2347875488e4e58c79e3fe0cb3d4223bea9d26e"} Feb 04 11:46:53 crc kubenswrapper[4728]: I0204 11:46:53.518075 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:53 crc kubenswrapper[4728]: I0204 11:46:53.518113 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:46:53 crc kubenswrapper[4728]: I0204 11:46:53.570934 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20df29d2-926c-4921-a7f2-eac948556d19" path="/var/lib/kubelet/pods/20df29d2-926c-4921-a7f2-eac948556d19/volumes" Feb 04 11:46:54 crc kubenswrapper[4728]: I0204 11:46:54.527129 4728 generic.go:334] "Generic (PLEG): container finished" podID="d949b343-bfde-4d50-81b1-a7c66765c076" containerID="974db8f25fb6bef41dc01e78218ecc1a75f18ef547a5ac411dea800b4a63e201" exitCode=0 Feb 04 11:46:54 crc kubenswrapper[4728]: I0204 11:46:54.527218 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kgzbm" 
event={"ID":"d949b343-bfde-4d50-81b1-a7c66765c076","Type":"ContainerDied","Data":"974db8f25fb6bef41dc01e78218ecc1a75f18ef547a5ac411dea800b4a63e201"} Feb 04 11:46:54 crc kubenswrapper[4728]: I0204 11:46:54.529373 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33fb50c4-1e12-45bc-ab0d-efd598f73530","Type":"ContainerStarted","Data":"22fab38c866c88b6df250f55e01db7d6c7ab6d2ded1ef87a28ad231ded2f26ae"} Feb 04 11:46:54 crc kubenswrapper[4728]: I0204 11:46:54.529425 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33fb50c4-1e12-45bc-ab0d-efd598f73530","Type":"ContainerStarted","Data":"478a2d4269fbf1bde566a160d4a65fc15e6be3997065f2052bc4899fa02903c2"} Feb 04 11:46:54 crc kubenswrapper[4728]: I0204 11:46:54.865210 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jx9lp" Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.023190 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95mpp\" (UniqueName: \"kubernetes.io/projected/2186aabd-28ff-488a-a224-01c14710adac-kube-api-access-95mpp\") pod \"2186aabd-28ff-488a-a224-01c14710adac\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.023371 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-config-data\") pod \"2186aabd-28ff-488a-a224-01c14710adac\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.023424 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-combined-ca-bundle\") pod \"2186aabd-28ff-488a-a224-01c14710adac\" (UID: \"2186aabd-28ff-488a-a224-01c14710adac\") " Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.030270 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2186aabd-28ff-488a-a224-01c14710adac-kube-api-access-95mpp" (OuterVolumeSpecName: "kube-api-access-95mpp") pod "2186aabd-28ff-488a-a224-01c14710adac" (UID: "2186aabd-28ff-488a-a224-01c14710adac"). InnerVolumeSpecName "kube-api-access-95mpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.053884 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2186aabd-28ff-488a-a224-01c14710adac" (UID: "2186aabd-28ff-488a-a224-01c14710adac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.097610 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-config-data" (OuterVolumeSpecName: "config-data") pod "2186aabd-28ff-488a-a224-01c14710adac" (UID: "2186aabd-28ff-488a-a224-01c14710adac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.127338 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95mpp\" (UniqueName: \"kubernetes.io/projected/2186aabd-28ff-488a-a224-01c14710adac-kube-api-access-95mpp\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.127378 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.127391 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2186aabd-28ff-488a-a224-01c14710adac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.543225 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jx9lp" event={"ID":"2186aabd-28ff-488a-a224-01c14710adac","Type":"ContainerDied","Data":"42eb3509d3e4fdf4876c92c6908112f1b90a2bd4db5b299d3577678f46b3d149"} Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.543279 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42eb3509d3e4fdf4876c92c6908112f1b90a2bd4db5b299d3577678f46b3d149" Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.543303 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jx9lp" Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.547037 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33fb50c4-1e12-45bc-ab0d-efd598f73530","Type":"ContainerStarted","Data":"096ef98d022ff2a5f5d7244179343c64d61422dac7b8a5bb521bf80838aedf97"} Feb 04 11:46:55 crc kubenswrapper[4728]: I0204 11:46:55.898411 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.041564 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8whl\" (UniqueName: \"kubernetes.io/projected/d949b343-bfde-4d50-81b1-a7c66765c076-kube-api-access-b8whl\") pod \"d949b343-bfde-4d50-81b1-a7c66765c076\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.041747 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d949b343-bfde-4d50-81b1-a7c66765c076-etc-machine-id\") pod \"d949b343-bfde-4d50-81b1-a7c66765c076\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.041860 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-scripts\") pod \"d949b343-bfde-4d50-81b1-a7c66765c076\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.041885 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-db-sync-config-data\") pod \"d949b343-bfde-4d50-81b1-a7c66765c076\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.041894 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d949b343-bfde-4d50-81b1-a7c66765c076-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d949b343-bfde-4d50-81b1-a7c66765c076" (UID: "d949b343-bfde-4d50-81b1-a7c66765c076"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.041954 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-combined-ca-bundle\") pod \"d949b343-bfde-4d50-81b1-a7c66765c076\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.041999 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-config-data\") pod \"d949b343-bfde-4d50-81b1-a7c66765c076\" (UID: \"d949b343-bfde-4d50-81b1-a7c66765c076\") " Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.042406 4728 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d949b343-bfde-4d50-81b1-a7c66765c076-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.046548 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d949b343-bfde-4d50-81b1-a7c66765c076" (UID: "d949b343-bfde-4d50-81b1-a7c66765c076"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.046798 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d949b343-bfde-4d50-81b1-a7c66765c076-kube-api-access-b8whl" (OuterVolumeSpecName: "kube-api-access-b8whl") pod "d949b343-bfde-4d50-81b1-a7c66765c076" (UID: "d949b343-bfde-4d50-81b1-a7c66765c076"). InnerVolumeSpecName "kube-api-access-b8whl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.047426 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-scripts" (OuterVolumeSpecName: "scripts") pod "d949b343-bfde-4d50-81b1-a7c66765c076" (UID: "d949b343-bfde-4d50-81b1-a7c66765c076"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.072751 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d949b343-bfde-4d50-81b1-a7c66765c076" (UID: "d949b343-bfde-4d50-81b1-a7c66765c076"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.095900 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-config-data" (OuterVolumeSpecName: "config-data") pod "d949b343-bfde-4d50-81b1-a7c66765c076" (UID: "d949b343-bfde-4d50-81b1-a7c66765c076"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.144289 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.144339 4728 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.144352 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.144360 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d949b343-bfde-4d50-81b1-a7c66765c076-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.144369 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8whl\" (UniqueName: \"kubernetes.io/projected/d949b343-bfde-4d50-81b1-a7c66765c076-kube-api-access-b8whl\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.558295 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kgzbm" event={"ID":"d949b343-bfde-4d50-81b1-a7c66765c076","Type":"ContainerDied","Data":"5e7f5f04af80700c2ae7fb2bae85e6f4a7174d6e5a18ca58acd6760d84744b2c"} Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.558621 4728 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="5e7f5f04af80700c2ae7fb2bae85e6f4a7174d6e5a18ca58acd6760d84744b2c" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.558656 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kgzbm" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.560588 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33fb50c4-1e12-45bc-ab0d-efd598f73530","Type":"ContainerStarted","Data":"5583b640a86080cac4452dbdf391a1c7944c1b79f306dc12b9f321301af6f94c"} Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.942321 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 11:46:56 crc kubenswrapper[4728]: E0204 11:46:56.946943 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d949b343-bfde-4d50-81b1-a7c66765c076" containerName="cinder-db-sync" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.946998 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d949b343-bfde-4d50-81b1-a7c66765c076" containerName="cinder-db-sync" Feb 04 11:46:56 crc kubenswrapper[4728]: E0204 11:46:56.947041 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2186aabd-28ff-488a-a224-01c14710adac" containerName="heat-db-sync" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.947050 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2186aabd-28ff-488a-a224-01c14710adac" containerName="heat-db-sync" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.947377 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2186aabd-28ff-488a-a224-01c14710adac" containerName="heat-db-sync" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.947412 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d949b343-bfde-4d50-81b1-a7c66765c076" containerName="cinder-db-sync" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.948548 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.954561 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.954803 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.955092 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bhbt8" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.955343 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 04 11:46:56 crc kubenswrapper[4728]: I0204 11:46:56.977712 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.017646 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6z8lr"] Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.017901 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" podUID="7d9f2a7f-3218-4705-862e-9ab115f73023" containerName="dnsmasq-dns" containerID="cri-o://27682c34e67c35ffee76a17d5bfe74d34b8600ee6b5a9fdddb9f06001a5955da" gracePeriod=10 Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.025255 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.060804 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.060904 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.061027 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.061118 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jc7\" (UniqueName: \"kubernetes.io/projected/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-kube-api-access-f6jc7\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.061170 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 
11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.061222 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.061404 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xwdgg"] Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.063255 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.078181 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xwdgg"] Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.153466 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.162708 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.162808 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.162837 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.162890 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.162917 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c99m4\" (UniqueName: \"kubernetes.io/projected/a47e2788-f585-4894-8e5b-e3b81fdafa60-kube-api-access-c99m4\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.162979 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.163014 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.163076 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6jc7\" (UniqueName: \"kubernetes.io/projected/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-kube-api-access-f6jc7\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.163112 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.163145 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-config\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.163173 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.163200 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.163902 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.166056 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.178946 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.180379 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.183217 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" podUID="7d9f2a7f-3218-4705-862e-9ab115f73023" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.159:5353: connect: connection refused" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.186391 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.191071 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6jc7\" (UniqueName: \"kubernetes.io/projected/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-kube-api-access-f6jc7\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.191472 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.193027 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.195080 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.267703 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-config\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.267756 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.267825 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.267850 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.267881 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.267898 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv5r8\" (UniqueName: \"kubernetes.io/projected/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-kube-api-access-hv5r8\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.267939 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.267958 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data-custom\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.267977 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c99m4\" (UniqueName: \"kubernetes.io/projected/a47e2788-f585-4894-8e5b-e3b81fdafa60-kube-api-access-c99m4\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.268016 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.268035 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.268062 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-scripts\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 
11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.268084 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-logs\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.269023 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-config\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.269474 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.269892 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.270130 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.270445 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.277596 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.298823 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c99m4\" (UniqueName: \"kubernetes.io/projected/a47e2788-f585-4894-8e5b-e3b81fdafa60-kube-api-access-c99m4\") pod \"dnsmasq-dns-5c9776ccc5-xwdgg\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.373118 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.373515 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-scripts\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.373562 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-logs\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.373703 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.373919 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv5r8\" (UniqueName: \"kubernetes.io/projected/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-kube-api-access-hv5r8\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.373970 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.373988 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data-custom\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.374258 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.376279 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-logs\") pod \"cinder-api-0\" (UID: 
\"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.382405 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.383144 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.387325 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-scripts\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.398553 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data-custom\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.399601 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv5r8\" (UniqueName: \"kubernetes.io/projected/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-kube-api-access-hv5r8\") pod \"cinder-api-0\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.579048 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.626164 4728 generic.go:334] "Generic (PLEG): container finished" podID="7d9f2a7f-3218-4705-862e-9ab115f73023" containerID="27682c34e67c35ffee76a17d5bfe74d34b8600ee6b5a9fdddb9f06001a5955da" exitCode=0 Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.630854 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" event={"ID":"7d9f2a7f-3218-4705-862e-9ab115f73023","Type":"ContainerDied","Data":"27682c34e67c35ffee76a17d5bfe74d34b8600ee6b5a9fdddb9f06001a5955da"} Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.655541 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.682962 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.796821 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-sb\") pod \"7d9f2a7f-3218-4705-862e-9ab115f73023\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.796909 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-svc\") pod \"7d9f2a7f-3218-4705-862e-9ab115f73023\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.796936 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-swift-storage-0\") pod \"7d9f2a7f-3218-4705-862e-9ab115f73023\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.796992 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-config\") pod \"7d9f2a7f-3218-4705-862e-9ab115f73023\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.797019 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-nb\") pod \"7d9f2a7f-3218-4705-862e-9ab115f73023\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.797048 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ccfq\" (UniqueName: \"kubernetes.io/projected/7d9f2a7f-3218-4705-862e-9ab115f73023-kube-api-access-5ccfq\") pod \"7d9f2a7f-3218-4705-862e-9ab115f73023\" (UID: \"7d9f2a7f-3218-4705-862e-9ab115f73023\") " Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.821000 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d9f2a7f-3218-4705-862e-9ab115f73023-kube-api-access-5ccfq" (OuterVolumeSpecName: "kube-api-access-5ccfq") pod "7d9f2a7f-3218-4705-862e-9ab115f73023" (UID: "7d9f2a7f-3218-4705-862e-9ab115f73023"). InnerVolumeSpecName "kube-api-access-5ccfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.893511 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7d9f2a7f-3218-4705-862e-9ab115f73023" (UID: "7d9f2a7f-3218-4705-862e-9ab115f73023"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.902041 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ccfq\" (UniqueName: \"kubernetes.io/projected/7d9f2a7f-3218-4705-862e-9ab115f73023-kube-api-access-5ccfq\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.902074 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.916722 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-config" (OuterVolumeSpecName: "config") pod "7d9f2a7f-3218-4705-862e-9ab115f73023" (UID: "7d9f2a7f-3218-4705-862e-9ab115f73023"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.928622 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7d9f2a7f-3218-4705-862e-9ab115f73023" (UID: "7d9f2a7f-3218-4705-862e-9ab115f73023"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.950450 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d9f2a7f-3218-4705-862e-9ab115f73023" (UID: "7d9f2a7f-3218-4705-862e-9ab115f73023"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:57 crc kubenswrapper[4728]: I0204 11:46:57.950831 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7d9f2a7f-3218-4705-862e-9ab115f73023" (UID: "7d9f2a7f-3218-4705-862e-9ab115f73023"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.010074 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.010109 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.010127 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.010141 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d9f2a7f-3218-4705-862e-9ab115f73023-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:46:58 crc kubenswrapper[4728]: W0204 11:46:58.072723 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b6cd57a_13cd_4e3c_a8e6_ea77a9dfc8e4.slice/crio-67c12c009e9647530ecf9e286e05aa51219864abe2fd074b6fb456367c166321 WatchSource:0}: Error finding container 67c12c009e9647530ecf9e286e05aa51219864abe2fd074b6fb456367c166321: Status 404 returned error can't find the container with id 67c12c009e9647530ecf9e286e05aa51219864abe2fd074b6fb456367c166321 Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.076166 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.394865 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.432493 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xwdgg"] Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.641889 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad","Type":"ContainerStarted","Data":"5cd07c7cbcf5e2177d5dc476f0fe1b4b9b729c0df3e7cf8b419ff60ec3585c7a"} Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.643470 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" event={"ID":"a47e2788-f585-4894-8e5b-e3b81fdafa60","Type":"ContainerStarted","Data":"a1ef2bd7a13a4645b4504fcae46249b6e82511553ac5e4de1614bf471a895b79"} Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.646131 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" event={"ID":"7d9f2a7f-3218-4705-862e-9ab115f73023","Type":"ContainerDied","Data":"9b10c31d1573e55ce8a0ba7fc30bf7af9b7bf76f7b2021cebfa0f2752f9bd8a2"} Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.646173 4728 scope.go:117] "RemoveContainer" containerID="27682c34e67c35ffee76a17d5bfe74d34b8600ee6b5a9fdddb9f06001a5955da" Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.646317 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6z8lr" Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.671291 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4","Type":"ContainerStarted","Data":"67c12c009e9647530ecf9e286e05aa51219864abe2fd074b6fb456367c166321"} Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.755879 4728 scope.go:117] "RemoveContainer" containerID="2c23284725bf6d2a084ca5584455ae82bec838ec39a993c9e153626379034265" Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.765018 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6z8lr"] Feb 04 11:46:58 crc kubenswrapper[4728]: I0204 11:46:58.779718 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6z8lr"] Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.412311 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.488412 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.586479 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d9f2a7f-3218-4705-862e-9ab115f73023" path="/var/lib/kubelet/pods/7d9f2a7f-3218-4705-862e-9ab115f73023/volumes" Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.728724 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad","Type":"ContainerStarted","Data":"173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14"} Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.757508 4728 generic.go:334] "Generic (PLEG): container finished" podID="a47e2788-f585-4894-8e5b-e3b81fdafa60" containerID="438b887b8e447c64fda07c84accd52f88b732f0dc62292893cc841aa1fc848b5" exitCode=0 Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.757605 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" event={"ID":"a47e2788-f585-4894-8e5b-e3b81fdafa60","Type":"ContainerDied","Data":"438b887b8e447c64fda07c84accd52f88b732f0dc62292893cc841aa1fc848b5"} Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.786496 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33fb50c4-1e12-45bc-ab0d-efd598f73530","Type":"ContainerStarted","Data":"9b36c470660c3301c9a77e90c6e0210b36528aec3b53563ae6bb0d67b321ef47"} Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.788295 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.873523 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.107163104 podStartE2EDuration="7.873499348s" podCreationTimestamp="2026-02-04 11:46:52 +0000 UTC" firstStartedPulling="2026-02-04 11:46:53.519862054 +0000 UTC m=+1162.662566439" lastFinishedPulling="2026-02-04 11:46:59.286198298 +0000 UTC m=+1168.428902683" observedRunningTime="2026-02-04 11:46:59.822270973 +0000 UTC m=+1168.964975358" watchObservedRunningTime="2026-02-04 11:46:59.873499348 +0000 UTC m=+1169.016203733" Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.919185 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-758cc66857-djf64"] 
Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.919458 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-758cc66857-djf64" podUID="f984d281-95b6-45be-abe8-d17370c22645" containerName="neutron-api" containerID="cri-o://0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b" gracePeriod=30 Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.919605 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-758cc66857-djf64" podUID="f984d281-95b6-45be-abe8-d17370c22645" containerName="neutron-httpd" containerID="cri-o://d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a" gracePeriod=30 Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.948432 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-758cc66857-djf64" podUID="f984d281-95b6-45be-abe8-d17370c22645" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": read tcp 10.217.0.2:39692->10.217.0.154:9696: read: connection reset by peer" Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.956464 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84679c4c57-hc428"] Feb 04 11:46:59 crc kubenswrapper[4728]: E0204 11:46:59.956915 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9f2a7f-3218-4705-862e-9ab115f73023" containerName="dnsmasq-dns" Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.956933 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9f2a7f-3218-4705-862e-9ab115f73023" containerName="dnsmasq-dns" Feb 04 11:46:59 crc kubenswrapper[4728]: E0204 11:46:59.956957 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9f2a7f-3218-4705-862e-9ab115f73023" containerName="init" Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.956965 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9f2a7f-3218-4705-862e-9ab115f73023" containerName="init" Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.957150 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d9f2a7f-3218-4705-862e-9ab115f73023" containerName="dnsmasq-dns" Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.963671 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:46:59 crc kubenswrapper[4728]: I0204 11:46:59.979678 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84679c4c57-hc428"] Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.063945 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9n2l\" (UniqueName: \"kubernetes.io/projected/58949fe9-f572-4c71-80c8-925cee89421e-kube-api-access-p9n2l\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.064016 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-config\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.064100 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-combined-ca-bundle\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.064207 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-internal-tls-certs\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.064257 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-ovndb-tls-certs\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.064310 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-httpd-config\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.064343 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-public-tls-certs\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.177360 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-combined-ca-bundle\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.177488 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-internal-tls-certs\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.177543 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-ovndb-tls-certs\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.177577 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-public-tls-certs\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.177597 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-httpd-config\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.177636 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9n2l\" (UniqueName: \"kubernetes.io/projected/58949fe9-f572-4c71-80c8-925cee89421e-kube-api-access-p9n2l\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.177733 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-config\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.186189 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-config\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.187506 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-public-tls-certs\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.202404 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-internal-tls-certs\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.203128 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-ovndb-tls-certs\") pod \"neutron-84679c4c57-hc428\" (UID: 
\"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.203899 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-combined-ca-bundle\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.207853 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9n2l\" (UniqueName: \"kubernetes.io/projected/58949fe9-f572-4c71-80c8-925cee89421e-kube-api-access-p9n2l\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.209266 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58949fe9-f572-4c71-80c8-925cee89421e-httpd-config\") pod \"neutron-84679c4c57-hc428\" (UID: \"58949fe9-f572-4c71-80c8-925cee89421e\") " pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.280597 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.299172 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.458510 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.830567 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" event={"ID":"a47e2788-f585-4894-8e5b-e3b81fdafa60","Type":"ContainerStarted","Data":"0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2"} Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.831191 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.845919 4728 generic.go:334] "Generic (PLEG): container finished" podID="f984d281-95b6-45be-abe8-d17370c22645" containerID="d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a" exitCode=0 Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.846106 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-758cc66857-djf64" event={"ID":"f984d281-95b6-45be-abe8-d17370c22645","Type":"ContainerDied","Data":"d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a"} Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.863835 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4","Type":"ContainerStarted","Data":"a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94"} Feb 04 11:47:00 crc kubenswrapper[4728]: I0204 11:47:00.871540 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" podStartSLOduration=4.871522845 podStartE2EDuration="4.871522845s" podCreationTimestamp="2026-02-04 11:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-04 11:47:00.851155287 +0000 UTC m=+1169.993859672" watchObservedRunningTime="2026-02-04 11:47:00.871522845 +0000 UTC m=+1170.014227230" Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.034345 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84679c4c57-hc428"] Feb 04 11:47:01 crc kubenswrapper[4728]: W0204 11:47:01.095239 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58949fe9_f572_4c71_80c8_925cee89421e.slice/crio-b7ffadd3af75c46667bf522c65f193c11395f63076f9af5611cc976aebf04571 WatchSource:0}: Error finding container b7ffadd3af75c46667bf522c65f193c11395f63076f9af5611cc976aebf04571: Status 404 returned error can't find the container with id b7ffadd3af75c46667bf522c65f193c11395f63076f9af5611cc976aebf04571 Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.486205 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-758cc66857-djf64" podUID="f984d281-95b6-45be-abe8-d17370c22645" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused" Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.878958 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4","Type":"ContainerStarted","Data":"afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a"} Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.882993 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad","Type":"ContainerStarted","Data":"a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982"} Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.883080 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" containerName="cinder-api-log" containerID="cri-o://173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14" gracePeriod=30 Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.883299 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.883272 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" containerName="cinder-api" containerID="cri-o://a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982" gracePeriod=30 Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.889025 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84679c4c57-hc428" event={"ID":"58949fe9-f572-4c71-80c8-925cee89421e","Type":"ContainerStarted","Data":"835cf980ee6fbbe91f8143a0033309d3e1f331ab4a8672937e1990bf29901928"} Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.889097 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84679c4c57-hc428" event={"ID":"58949fe9-f572-4c71-80c8-925cee89421e","Type":"ContainerStarted","Data":"dfd25346cc7655b34a479f0a2c9cea44b4331069b5fb3055a43183751eb04b84"} Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.889112 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84679c4c57-hc428" 
event={"ID":"58949fe9-f572-4c71-80c8-925cee89421e","Type":"ContainerStarted","Data":"b7ffadd3af75c46667bf522c65f193c11395f63076f9af5611cc976aebf04571"} Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.889471 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84679c4c57-hc428" Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.908849 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.021644047 podStartE2EDuration="5.908826661s" podCreationTimestamp="2026-02-04 11:46:56 +0000 UTC" firstStartedPulling="2026-02-04 11:46:58.084629282 +0000 UTC m=+1167.227333667" lastFinishedPulling="2026-02-04 11:46:58.971811896 +0000 UTC m=+1168.114516281" observedRunningTime="2026-02-04 11:47:01.902814198 +0000 UTC m=+1171.045518583" watchObservedRunningTime="2026-02-04 11:47:01.908826661 +0000 UTC m=+1171.051531046" Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.931017 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84679c4c57-hc428" podStartSLOduration=2.930996942 podStartE2EDuration="2.930996942s" podCreationTimestamp="2026-02-04 11:46:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:47:01.923957023 +0000 UTC m=+1171.066661408" watchObservedRunningTime="2026-02-04 11:47:01.930996942 +0000 UTC m=+1171.073701327" Feb 04 11:47:01 crc kubenswrapper[4728]: I0204 11:47:01.950039 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.950023826 podStartE2EDuration="4.950023826s" podCreationTimestamp="2026-02-04 11:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:47:01.949513815 +0000 UTC m=+1171.092218200" watchObservedRunningTime="2026-02-04 11:47:01.950023826 +0000 UTC m=+1171.092728211" Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.278254 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.709157 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.862976 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.956732 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56d59c75d6-qgl25" Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.961964 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data\") pod \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.962084 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv5r8\" (UniqueName: \"kubernetes.io/projected/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-kube-api-access-hv5r8\") pod \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.962175 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-etc-machine-id\") pod \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.962217 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data-custom\") pod \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.962247 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-logs\") pod \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.962289 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-scripts\") pod \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.962416 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-combined-ca-bundle\") pod \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\" (UID: \"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad\") " Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.962618 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" (UID: "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.963004 4728 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.963913 4728 generic.go:334] "Generic (PLEG): container finished" podID="0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" containerID="a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982" exitCode=0 Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.963934 4728 generic.go:334] "Generic (PLEG): container finished" podID="0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" containerID="173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14" exitCode=143 Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.964742 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.964909 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad","Type":"ContainerDied","Data":"a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982"} Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.964931 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad","Type":"ContainerDied","Data":"173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14"} Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.964941 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a9651c4-29a3-46ca-8b4d-6f134ffc86ad","Type":"ContainerDied","Data":"5cd07c7cbcf5e2177d5dc476f0fe1b4b9b729c0df3e7cf8b419ff60ec3585c7a"} Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.964954 4728 scope.go:117] "RemoveContainer" containerID="a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982" Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.973853 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-logs" (OuterVolumeSpecName: "logs") pod "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" (UID: "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.976004 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" (UID: "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:02 crc kubenswrapper[4728]: I0204 11:47:02.994918 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-scripts" (OuterVolumeSpecName: "scripts") pod "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" (UID: "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.016121 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-kube-api-access-hv5r8" (OuterVolumeSpecName: "kube-api-access-hv5r8") pod "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" (UID: "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad"). InnerVolumeSpecName "kube-api-access-hv5r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.047637 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6db974ddcd-24hrm"] Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.047883 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6db974ddcd-24hrm" podUID="7b4c4df9-ba39-445b-8a27-e5bb17be0079" containerName="barbican-api-log" containerID="cri-o://a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4" gracePeriod=30 Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.048328 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6db974ddcd-24hrm" podUID="7b4c4df9-ba39-445b-8a27-e5bb17be0079" containerName="barbican-api" containerID="cri-o://e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826" gracePeriod=30 Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.066561 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv5r8\" (UniqueName: \"kubernetes.io/projected/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-kube-api-access-hv5r8\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.066595 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.066604 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.066614 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.123571 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" (UID: "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.135026 4728 scope.go:117] "RemoveContainer" containerID="173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.177127 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.285031 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data" (OuterVolumeSpecName: "config-data") pod "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" (UID: "0a9651c4-29a3-46ca-8b4d-6f134ffc86ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.298012 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.390204 4728 scope.go:117] "RemoveContainer" containerID="a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982" Feb 04 11:47:03 crc kubenswrapper[4728]: E0204 11:47:03.390653 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982\": container with ID starting with a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982 not found: ID does not exist" containerID="a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.390688 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982"} err="failed to get container status \"a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982\": rpc error: code = NotFound desc = could not find container \"a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982\": container with ID starting with a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982 not found: ID does not exist" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.390710 4728 scope.go:117] "RemoveContainer" containerID="173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14" Feb 04 11:47:03 crc kubenswrapper[4728]: E0204 11:47:03.391086 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14\": container with ID starting with 173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14 not found: ID does not exist" containerID="173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.391115 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14"} err="failed to get container status \"173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14\": rpc error: code = NotFound desc = could not find container \"173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14\": container with 
ID starting with 173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14 not found: ID does not exist" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.391128 4728 scope.go:117] "RemoveContainer" containerID="a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.391536 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982"} err="failed to get container status \"a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982\": rpc error: code = NotFound desc = could not find container \"a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982\": container with ID starting with a00a90a3dc46406072426f8d5bc1c419644525ccea4d77938e4001417844d982 not found: ID does not exist" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.391561 4728 scope.go:117] "RemoveContainer" containerID="173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.391783 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14"} err="failed to get container status \"173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14\": rpc error: code = NotFound desc = could not find container \"173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14\": container with ID starting with 173f792b9f8d21d55bb4cafb4de45d813e4fedd3c88607043bd61605ac507b14 not found: ID does not exist" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.629356 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.655677 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.707127 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 04 11:47:03 crc kubenswrapper[4728]: E0204 11:47:03.707691 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" containerName="cinder-api" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.707708 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" containerName="cinder-api" Feb 04 11:47:03 crc kubenswrapper[4728]: E0204 11:47:03.707744 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" containerName="cinder-api-log" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.707764 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" containerName="cinder-api-log" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.707965 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" containerName="cinder-api" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.707975 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" containerName="cinder-api-log" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.708968 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.712507 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.713175 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.715995 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.727618 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.810217 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.810302 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.810423 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvkk5\" (UniqueName: \"kubernetes.io/projected/11d8f352-7a48-49a3-a5fc-91929d41faf8-kube-api-access-rvkk5\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.810470 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11d8f352-7a48-49a3-a5fc-91929d41faf8-logs\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.810506 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.810532 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-config-data\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.810554 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.810708 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-scripts\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.810831 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11d8f352-7a48-49a3-a5fc-91929d41faf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.913171 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkk5\" (UniqueName: \"kubernetes.io/projected/11d8f352-7a48-49a3-a5fc-91929d41faf8-kube-api-access-rvkk5\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.913258 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11d8f352-7a48-49a3-a5fc-91929d41faf8-logs\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.913316 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.913345 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-config-data\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.913367 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.913409 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-scripts\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.913441 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11d8f352-7a48-49a3-a5fc-91929d41faf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.913500 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.913572 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.913723 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11d8f352-7a48-49a3-a5fc-91929d41faf8-logs\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.913913 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11d8f352-7a48-49a3-a5fc-91929d41faf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.932838 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.933265 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-scripts\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.933568 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.933720 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-config-data\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.935984 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.937205 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d8f352-7a48-49a3-a5fc-91929d41faf8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.942333 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkk5\" (UniqueName: \"kubernetes.io/projected/11d8f352-7a48-49a3-a5fc-91929d41faf8-kube-api-access-rvkk5\") pod \"cinder-api-0\" (UID: \"11d8f352-7a48-49a3-a5fc-91929d41faf8\") " pod="openstack/cinder-api-0" Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.975697 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b4c4df9-ba39-445b-8a27-e5bb17be0079" 
containerID="a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4" exitCode=143 Feb 04 11:47:03 crc kubenswrapper[4728]: I0204 11:47:03.975796 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6db974ddcd-24hrm" event={"ID":"7b4c4df9-ba39-445b-8a27-e5bb17be0079","Type":"ContainerDied","Data":"a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4"} Feb 04 11:47:04 crc kubenswrapper[4728]: I0204 11:47:04.046071 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 04 11:47:04 crc kubenswrapper[4728]: I0204 11:47:04.707394 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 04 11:47:04 crc kubenswrapper[4728]: I0204 11:47:04.805588 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:47:04 crc kubenswrapper[4728]: I0204 11:47:04.822301 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:47:04 crc kubenswrapper[4728]: I0204 11:47:04.993180 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11d8f352-7a48-49a3-a5fc-91929d41faf8","Type":"ContainerStarted","Data":"f64177e52e73e79278762524551b61d2775af8e84d4ccda3b8ddc98b48a5f2f0"} Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.179690 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85f76984f4-b8kmh"] Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.181463 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.207196 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85f76984f4-b8kmh"] Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.239009 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-combined-ca-bundle\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.239091 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-scripts\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.239132 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-logs\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.239150 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-public-tls-certs\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.239204 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-internal-tls-certs\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.239236 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbt4\" (UniqueName: \"kubernetes.io/projected/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-kube-api-access-sfbt4\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.239303 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-config-data\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.341910 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-config-data\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.342203 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-combined-ca-bundle\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.342226 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-scripts\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.342252 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-logs\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.342268 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-public-tls-certs\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.342308 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-internal-tls-certs\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.342335 4728 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-sfbt4\" (UniqueName: \"kubernetes.io/projected/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-kube-api-access-sfbt4\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.343864 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-logs\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.356215 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-scripts\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.367683 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-internal-tls-certs\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.371532 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-config-data\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.376164 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-combined-ca-bundle\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.382306 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-public-tls-certs\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.408333 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfbt4\" (UniqueName: \"kubernetes.io/projected/340bbc5d-d2d4-48cc-bf4b-6f2454d9819a-kube-api-access-sfbt4\") pod \"placement-85f76984f4-b8kmh\" (UID: \"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a\") " pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.454853 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.454899 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.454938 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.455559 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9955d85c683603107a50c1f93858af2076e1b6307f2485d080e9953f839e1ba"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.455606 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://c9955d85c683603107a50c1f93858af2076e1b6307f2485d080e9953f839e1ba" gracePeriod=600 Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.525448 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.566120 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a9651c4-29a3-46ca-8b4d-6f134ffc86ad" path="/var/lib/kubelet/pods/0a9651c4-29a3-46ca-8b4d-6f134ffc86ad/volumes" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.840540 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-758cc66857-djf64" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.962445 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6dqs\" (UniqueName: \"kubernetes.io/projected/f984d281-95b6-45be-abe8-d17370c22645-kube-api-access-d6dqs\") pod \"f984d281-95b6-45be-abe8-d17370c22645\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.962503 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-httpd-config\") pod \"f984d281-95b6-45be-abe8-d17370c22645\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.962583 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-combined-ca-bundle\") pod \"f984d281-95b6-45be-abe8-d17370c22645\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.962623 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-config\") pod \"f984d281-95b6-45be-abe8-d17370c22645\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.962659 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-internal-tls-certs\") pod \"f984d281-95b6-45be-abe8-d17370c22645\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 
11:47:05.962722 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-ovndb-tls-certs\") pod \"f984d281-95b6-45be-abe8-d17370c22645\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.962753 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-public-tls-certs\") pod \"f984d281-95b6-45be-abe8-d17370c22645\" (UID: \"f984d281-95b6-45be-abe8-d17370c22645\") " Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.968224 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f984d281-95b6-45be-abe8-d17370c22645-kube-api-access-d6dqs" (OuterVolumeSpecName: "kube-api-access-d6dqs") pod "f984d281-95b6-45be-abe8-d17370c22645" (UID: "f984d281-95b6-45be-abe8-d17370c22645"). InnerVolumeSpecName "kube-api-access-d6dqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:05 crc kubenswrapper[4728]: I0204 11:47:05.968962 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f984d281-95b6-45be-abe8-d17370c22645" (UID: "f984d281-95b6-45be-abe8-d17370c22645"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.012086 4728 generic.go:334] "Generic (PLEG): container finished" podID="f984d281-95b6-45be-abe8-d17370c22645" containerID="0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b" exitCode=0 Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.012156 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-758cc66857-djf64" event={"ID":"f984d281-95b6-45be-abe8-d17370c22645","Type":"ContainerDied","Data":"0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b"} Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.012183 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-758cc66857-djf64" event={"ID":"f984d281-95b6-45be-abe8-d17370c22645","Type":"ContainerDied","Data":"66682683a5ffc570e5872a5b2984ad8dbe582591f87dc9a5bb420487e07754fb"} Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.012197 4728 scope.go:117] "RemoveContainer" containerID="d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.012352 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-758cc66857-djf64" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.017381 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11d8f352-7a48-49a3-a5fc-91929d41faf8","Type":"ContainerStarted","Data":"f0ac165ed2a8005aa42c8c6a48f3a6a9bf5cb0b28bfc52960ee1b87791572872"} Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.021377 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="c9955d85c683603107a50c1f93858af2076e1b6307f2485d080e9953f839e1ba" exitCode=0 Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.021421 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"c9955d85c683603107a50c1f93858af2076e1b6307f2485d080e9953f839e1ba"} Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.021631 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"df14b9397f5cab1fc5b2e7a5ea922d0337cd8f0ecd7c5a6f65afe229e61d080f"} Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.033782 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f984d281-95b6-45be-abe8-d17370c22645" (UID: "f984d281-95b6-45be-abe8-d17370c22645"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.038457 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f984d281-95b6-45be-abe8-d17370c22645" (UID: "f984d281-95b6-45be-abe8-d17370c22645"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.044828 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-config" (OuterVolumeSpecName: "config") pod "f984d281-95b6-45be-abe8-d17370c22645" (UID: "f984d281-95b6-45be-abe8-d17370c22645"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.051486 4728 scope.go:117] "RemoveContainer" containerID="0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.055160 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f984d281-95b6-45be-abe8-d17370c22645" (UID: "f984d281-95b6-45be-abe8-d17370c22645"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.066379 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6dqs\" (UniqueName: \"kubernetes.io/projected/f984d281-95b6-45be-abe8-d17370c22645-kube-api-access-d6dqs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.066567 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.066635 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.066732 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.066826 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.066903 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.088354 4728 scope.go:117] "RemoveContainer" containerID="d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a" Feb 04 11:47:06 crc kubenswrapper[4728]: E0204 11:47:06.088852 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a\": container with ID starting with d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a not found: ID does not exist" containerID="d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.088884 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a"} err="failed to get container status \"d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a\": rpc error: code = NotFound desc = could not find container \"d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a\": container with ID starting with d9bb9c53949f2d63b35357e90f85e501f77ca3ed42621fc98c315497f4952b9a not found: ID does not exist" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.088909 4728 scope.go:117] "RemoveContainer" containerID="0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b" Feb 04 11:47:06 crc kubenswrapper[4728]: E0204 11:47:06.089623 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b\": container with ID starting with 0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b not found: ID does not exist" containerID="0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b" Feb 04 
11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.089660 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b"} err="failed to get container status \"0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b\": rpc error: code = NotFound desc = could not find container \"0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b\": container with ID starting with 0c0f4d5a4f1fe57dcbf9b2ee2af0538c41f7431f188d284c549b22ad10c3517b not found: ID does not exist" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.089681 4728 scope.go:117] "RemoveContainer" containerID="57f05c207a10ae4fedd99430e02b6ac5fa7f5bce4ce363fc2daec5fefcb4c117" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.103450 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f984d281-95b6-45be-abe8-d17370c22645" (UID: "f984d281-95b6-45be-abe8-d17370c22645"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.116640 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85f76984f4-b8kmh"] Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.169265 4728 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f984d281-95b6-45be-abe8-d17370c22645-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.363311 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-758cc66857-djf64"] Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.382742 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-758cc66857-djf64"] Feb 04 11:47:06 crc kubenswrapper[4728]: I0204 11:47:06.972562 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.034098 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11d8f352-7a48-49a3-a5fc-91929d41faf8","Type":"ContainerStarted","Data":"e21c0104db87c4b81d7e32485a1495775bb21d4368edbc299a50a51f7d6b8a38"} Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.035146 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.050256 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85f76984f4-b8kmh" event={"ID":"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a","Type":"ContainerStarted","Data":"17b050024601a3ccd9306c1ba1503df65b086dc0d2d9f12178a819aa7acdb071"} Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.050316 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85f76984f4-b8kmh" event={"ID":"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a","Type":"ContainerStarted","Data":"bf10a710f281b38ceddf187c5d4aa456eb3926d2297f8507218446724c550f39"} Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.050334 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85f76984f4-b8kmh" event={"ID":"340bbc5d-d2d4-48cc-bf4b-6f2454d9819a","Type":"ContainerStarted","Data":"d080be503a7e4f070450a553c9c7dd24dbc78933dbc68408a1d2b125bd2276c9"} Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.051129 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.051161 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.060889 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.060871728 podStartE2EDuration="4.060871728s" podCreationTimestamp="2026-02-04 11:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:47:07.052890107 +0000 UTC m=+1176.195594512" watchObservedRunningTime="2026-02-04 11:47:07.060871728 +0000 UTC m=+1176.203576113" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.065647 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b4c4df9-ba39-445b-8a27-e5bb17be0079" containerID="e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826" exitCode=0 Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.065694 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6db974ddcd-24hrm" event={"ID":"7b4c4df9-ba39-445b-8a27-e5bb17be0079","Type":"ContainerDied","Data":"e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826"} Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.065721 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6db974ddcd-24hrm" event={"ID":"7b4c4df9-ba39-445b-8a27-e5bb17be0079","Type":"ContainerDied","Data":"67b304a861df0a4940c6ec0b92810969466755e7a74840799c8627a0ffb32d6a"} Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.065742 4728 scope.go:117] "RemoveContainer" containerID="e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.065886 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6db974ddcd-24hrm" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.077941 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85f76984f4-b8kmh" podStartSLOduration=2.077919826 podStartE2EDuration="2.077919826s" podCreationTimestamp="2026-02-04 11:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:47:07.073553711 +0000 UTC m=+1176.216258096" watchObservedRunningTime="2026-02-04 11:47:07.077919826 +0000 UTC m=+1176.220624211" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.082556 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-combined-ca-bundle\") pod \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.082658 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bffxh\" (UniqueName: \"kubernetes.io/projected/7b4c4df9-ba39-445b-8a27-e5bb17be0079-kube-api-access-bffxh\") pod \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.082742 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data-custom\") pod \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.082790 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data\") pod \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.082832 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b4c4df9-ba39-445b-8a27-e5bb17be0079-logs\") pod \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\" (UID: \"7b4c4df9-ba39-445b-8a27-e5bb17be0079\") " Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.083840 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b4c4df9-ba39-445b-8a27-e5bb17be0079-logs" (OuterVolumeSpecName: "logs") pod "7b4c4df9-ba39-445b-8a27-e5bb17be0079" (UID: "7b4c4df9-ba39-445b-8a27-e5bb17be0079"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.084987 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b4c4df9-ba39-445b-8a27-e5bb17be0079-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.088019 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7b4c4df9-ba39-445b-8a27-e5bb17be0079" (UID: "7b4c4df9-ba39-445b-8a27-e5bb17be0079"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.088420 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4c4df9-ba39-445b-8a27-e5bb17be0079-kube-api-access-bffxh" (OuterVolumeSpecName: "kube-api-access-bffxh") pod "7b4c4df9-ba39-445b-8a27-e5bb17be0079" (UID: "7b4c4df9-ba39-445b-8a27-e5bb17be0079"). InnerVolumeSpecName "kube-api-access-bffxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.108960 4728 scope.go:117] "RemoveContainer" containerID="a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.113503 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b4c4df9-ba39-445b-8a27-e5bb17be0079" (UID: "7b4c4df9-ba39-445b-8a27-e5bb17be0079"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.130265 4728 scope.go:117] "RemoveContainer" containerID="e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826" Feb 04 11:47:07 crc kubenswrapper[4728]: E0204 11:47:07.130858 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826\": container with ID starting with e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826 not found: ID does not exist" containerID="e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.130899 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826"} err="failed to get container status \"e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826\": rpc error: code = NotFound desc = could not find container \"e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826\": container with ID starting with e1e4386321133571b1419d93bbfb194de11eacc93667d38b59a5f6d625925826 not found: ID does not exist" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.130924 4728 scope.go:117] "RemoveContainer" containerID="a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4" Feb 04 11:47:07 crc kubenswrapper[4728]: E0204 11:47:07.131144 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4\": container with ID starting with a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4 not found: ID does not exist" containerID="a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.131175 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4"} err="failed to get container status \"a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4\": rpc error: code = NotFound desc = could not find container \"a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4\": container with ID starting with 
a0c45d87365619cc3a2adcf73e0f01bde01b823af026f323e49426e450805ea4 not found: ID does not exist" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.146821 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data" (OuterVolumeSpecName: "config-data") pod "7b4c4df9-ba39-445b-8a27-e5bb17be0079" (UID: "7b4c4df9-ba39-445b-8a27-e5bb17be0079"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.187291 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.187335 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bffxh\" (UniqueName: \"kubernetes.io/projected/7b4c4df9-ba39-445b-8a27-e5bb17be0079-kube-api-access-bffxh\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.187350 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.187361 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4c4df9-ba39-445b-8a27-e5bb17be0079-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.420211 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6db974ddcd-24hrm"] Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.428862 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6db974ddcd-24hrm"] Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.497663 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.540624 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.573474 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b4c4df9-ba39-445b-8a27-e5bb17be0079" path="/var/lib/kubelet/pods/7b4c4df9-ba39-445b-8a27-e5bb17be0079/volumes" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.574363 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f984d281-95b6-45be-abe8-d17370c22645" path="/var/lib/kubelet/pods/f984d281-95b6-45be-abe8-d17370c22645/volumes" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.581485 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.640889 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9pjrv"] Feb 04 11:47:07 crc kubenswrapper[4728]: I0204 11:47:07.641183 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" podUID="248d137a-3361-4df7-b7b6-c26cfb6c5a6d" containerName="dnsmasq-dns" containerID="cri-o://75d1ad79fa9c745f131875b2de1237dc8b2878039885eb4ca4f721a5b0a6ed03" gracePeriod=10 Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.081401 4728 generic.go:334] 
"Generic (PLEG): container finished" podID="248d137a-3361-4df7-b7b6-c26cfb6c5a6d" containerID="75d1ad79fa9c745f131875b2de1237dc8b2878039885eb4ca4f721a5b0a6ed03" exitCode=0 Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.081467 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" event={"ID":"248d137a-3361-4df7-b7b6-c26cfb6c5a6d","Type":"ContainerDied","Data":"75d1ad79fa9c745f131875b2de1237dc8b2878039885eb4ca4f721a5b0a6ed03"} Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.083779 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" containerName="cinder-scheduler" containerID="cri-o://a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94" gracePeriod=30 Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.083960 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" containerName="probe" containerID="cri-o://afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a" gracePeriod=30 Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.178772 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.320856 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-swift-storage-0\") pod \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.320924 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-sb\") pod \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.321011 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bv2h\" (UniqueName: \"kubernetes.io/projected/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-kube-api-access-2bv2h\") pod \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.321044 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-nb\") pod \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.321084 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-svc\") pod \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.321272 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-config\") pod \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\" (UID: \"248d137a-3361-4df7-b7b6-c26cfb6c5a6d\") " Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.339965 
4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-kube-api-access-2bv2h" (OuterVolumeSpecName: "kube-api-access-2bv2h") pod "248d137a-3361-4df7-b7b6-c26cfb6c5a6d" (UID: "248d137a-3361-4df7-b7b6-c26cfb6c5a6d"). InnerVolumeSpecName "kube-api-access-2bv2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.382040 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "248d137a-3361-4df7-b7b6-c26cfb6c5a6d" (UID: "248d137a-3361-4df7-b7b6-c26cfb6c5a6d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.386212 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "248d137a-3361-4df7-b7b6-c26cfb6c5a6d" (UID: "248d137a-3361-4df7-b7b6-c26cfb6c5a6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.388545 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-config" (OuterVolumeSpecName: "config") pod "248d137a-3361-4df7-b7b6-c26cfb6c5a6d" (UID: "248d137a-3361-4df7-b7b6-c26cfb6c5a6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.415265 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "248d137a-3361-4df7-b7b6-c26cfb6c5a6d" (UID: "248d137a-3361-4df7-b7b6-c26cfb6c5a6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.418611 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "248d137a-3361-4df7-b7b6-c26cfb6c5a6d" (UID: "248d137a-3361-4df7-b7b6-c26cfb6c5a6d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.424091 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.424116 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.424128 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.424139 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bv2h\" (UniqueName: \"kubernetes.io/projected/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-kube-api-access-2bv2h\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.424149 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:08 crc kubenswrapper[4728]: I0204 11:47:08.424157 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/248d137a-3361-4df7-b7b6-c26cfb6c5a6d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.094417 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.095178 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-9pjrv" event={"ID":"248d137a-3361-4df7-b7b6-c26cfb6c5a6d","Type":"ContainerDied","Data":"75606e0925d72e1a953cb2ac5771226f3919f91bafaf7498742f8465c1aab130"} Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.095214 4728 scope.go:117] "RemoveContainer" containerID="75d1ad79fa9c745f131875b2de1237dc8b2878039885eb4ca4f721a5b0a6ed03" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.164549 4728 scope.go:117] "RemoveContainer" containerID="e9e77c5c6160459ec8ccd30b6f9951208542784c0ad592e2b2eb68b4a817bc41" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.180414 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9pjrv"] Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.189216 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9pjrv"] Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.564519 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="248d137a-3361-4df7-b7b6-c26cfb6c5a6d" path="/var/lib/kubelet/pods/248d137a-3361-4df7-b7b6-c26cfb6c5a6d/volumes" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.637244 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.755093 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-etc-machine-id\") pod \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.755171 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-combined-ca-bundle\") pod \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.755221 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6jc7\" (UniqueName: \"kubernetes.io/projected/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-kube-api-access-f6jc7\") pod \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.755295 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" (UID: "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.755348 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data-custom\") pod \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.755412 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-scripts\") pod \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.755486 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data\") pod \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\" (UID: \"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4\") " Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.755940 4728 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.762786 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-kube-api-access-f6jc7" (OuterVolumeSpecName: "kube-api-access-f6jc7") pod "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" (UID: "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4"). InnerVolumeSpecName "kube-api-access-f6jc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.762908 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" (UID: "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.764835 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-scripts" (OuterVolumeSpecName: "scripts") pod "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" (UID: "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.816021 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" (UID: "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.827197 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-74d87669cb-xsvws" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.857544 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.857578 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6jc7\" (UniqueName: \"kubernetes.io/projected/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-kube-api-access-f6jc7\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.857592 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.857604 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.905862 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data" (OuterVolumeSpecName: "config-data") pod "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" (UID: "7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:09 crc kubenswrapper[4728]: I0204 11:47:09.959022 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.107477 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" containerID="afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a" exitCode=0 Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.107542 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" containerID="a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94" exitCode=0 Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.107570 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.107574 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4","Type":"ContainerDied","Data":"afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a"} Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.107638 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4","Type":"ContainerDied","Data":"a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94"} Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.107650 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4","Type":"ContainerDied","Data":"67c12c009e9647530ecf9e286e05aa51219864abe2fd074b6fb456367c166321"} Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.107685 4728 scope.go:117] "RemoveContainer" containerID="afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.130690 4728 scope.go:117] "RemoveContainer" containerID="a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.144825 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.158359 4728 scope.go:117] "RemoveContainer" containerID="afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a" Feb 04 11:47:10 crc kubenswrapper[4728]: E0204 11:47:10.160832 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a\": container with ID starting with afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a not found: ID does not exist" containerID="afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.160871 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a"} err="failed to get container status \"afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a\": rpc error: code = NotFound desc = could not find container \"afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a\": container with ID 
starting with afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a not found: ID does not exist" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.160897 4728 scope.go:117] "RemoveContainer" containerID="a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94" Feb 04 11:47:10 crc kubenswrapper[4728]: E0204 11:47:10.161261 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94\": container with ID starting with a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94 not found: ID does not exist" containerID="a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.161315 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94"} err="failed to get container status \"a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94\": rpc error: code = NotFound desc = could not find container \"a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94\": container with ID starting with a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94 not found: ID does not exist" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.161350 4728 scope.go:117] "RemoveContainer" containerID="afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.161653 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a"} err="failed to get container status \"afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a\": rpc error: code = NotFound desc = could not find container \"afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a\": container with ID starting with afe0ff676a769e655c9b6146c5b718021892636246c6e64764af5e37613be16a not found: ID does not exist" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.161683 4728 scope.go:117] "RemoveContainer" containerID="a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.161973 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94"} err="failed to get container status \"a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94\": rpc error: code = NotFound desc = could not find container \"a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94\": container with ID starting with a66db6da339700380ab8e84bea785e935cce3cc1b5575606d9507b0fdeae1c94 not found: ID does not exist" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.168823 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.181727 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 11:47:10 crc kubenswrapper[4728]: E0204 11:47:10.182198 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f984d281-95b6-45be-abe8-d17370c22645" containerName="neutron-api" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182222 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f984d281-95b6-45be-abe8-d17370c22645" containerName="neutron-api" Feb 04 11:47:10 crc kubenswrapper[4728]: E0204 11:47:10.182237 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4c4df9-ba39-445b-8a27-e5bb17be0079" containerName="barbican-api" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182244 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4c4df9-ba39-445b-8a27-e5bb17be0079" containerName="barbican-api" Feb 04 11:47:10 crc kubenswrapper[4728]: E0204 11:47:10.182273 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="248d137a-3361-4df7-b7b6-c26cfb6c5a6d" containerName="init" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182279 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="248d137a-3361-4df7-b7b6-c26cfb6c5a6d" containerName="init" Feb 04 11:47:10 crc kubenswrapper[4728]: E0204 11:47:10.182286 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="248d137a-3361-4df7-b7b6-c26cfb6c5a6d" containerName="dnsmasq-dns" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182291 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="248d137a-3361-4df7-b7b6-c26cfb6c5a6d" containerName="dnsmasq-dns" Feb 04 11:47:10 crc kubenswrapper[4728]: E0204 11:47:10.182303 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" containerName="cinder-scheduler" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182309 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" containerName="cinder-scheduler" Feb 04 11:47:10 crc kubenswrapper[4728]: E0204 11:47:10.182321 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f984d281-95b6-45be-abe8-d17370c22645" containerName="neutron-httpd" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182327 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f984d281-95b6-45be-abe8-d17370c22645" containerName="neutron-httpd" Feb 04 11:47:10 crc kubenswrapper[4728]: E0204 11:47:10.182336 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4c4df9-ba39-445b-8a27-e5bb17be0079" containerName="barbican-api-log" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182342 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4c4df9-ba39-445b-8a27-e5bb17be0079" containerName="barbican-api-log" Feb 04 11:47:10 crc kubenswrapper[4728]: E0204 11:47:10.182384 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" containerName="probe" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182391 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" containerName="probe" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182543 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f984d281-95b6-45be-abe8-d17370c22645" containerName="neutron-api" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182554 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f984d281-95b6-45be-abe8-d17370c22645" containerName="neutron-httpd" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182563 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" containerName="cinder-scheduler" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182576 4728 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="248d137a-3361-4df7-b7b6-c26cfb6c5a6d" containerName="dnsmasq-dns" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182587 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4c4df9-ba39-445b-8a27-e5bb17be0079" containerName="barbican-api" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182594 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4c4df9-ba39-445b-8a27-e5bb17be0079" containerName="barbican-api-log" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.182613 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" containerName="probe" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.183669 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.186045 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.196357 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.264042 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.264155 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdsw\" (UniqueName: \"kubernetes.io/projected/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-kube-api-access-gxdsw\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.264200 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-scripts\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.264380 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.264435 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.264470 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-config-data\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 
11:47:10.365700 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-scripts\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.365969 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.366008 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.366030 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-config-data\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.366116 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.366131 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.366207 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxdsw\" (UniqueName: \"kubernetes.io/projected/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-kube-api-access-gxdsw\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.371395 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-scripts\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.372479 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.375525 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-config-data\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " 
pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.382144 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.386707 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxdsw\" (UniqueName: \"kubernetes.io/projected/282fd0ba-8410-4d6f-bd7a-8715e0f9f8be-kube-api-access-gxdsw\") pod \"cinder-scheduler-0\" (UID: \"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be\") " pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.511547 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 04 11:47:10 crc kubenswrapper[4728]: I0204 11:47:10.982221 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 04 11:47:10 crc kubenswrapper[4728]: W0204 11:47:10.995253 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod282fd0ba_8410_4d6f_bd7a_8715e0f9f8be.slice/crio-d03dde6a982a8100dc305c67951b2e75709582cf9ee481adb47a28b795ac20c4 WatchSource:0}: Error finding container d03dde6a982a8100dc305c67951b2e75709582cf9ee481adb47a28b795ac20c4: Status 404 returned error can't find the container with id d03dde6a982a8100dc305c67951b2e75709582cf9ee481adb47a28b795ac20c4 Feb 04 11:47:11 crc kubenswrapper[4728]: I0204 11:47:11.126660 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be","Type":"ContainerStarted","Data":"d03dde6a982a8100dc305c67951b2e75709582cf9ee481adb47a28b795ac20c4"} Feb 04 11:47:11 crc kubenswrapper[4728]: I0204 11:47:11.571074 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4" path="/var/lib/kubelet/pods/7b6cd57a-13cd-4e3c-a8e6-ea77a9dfc8e4/volumes" Feb 04 11:47:12 crc kubenswrapper[4728]: I0204 11:47:12.154341 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be","Type":"ContainerStarted","Data":"94ae7ada7f4a628052534084d1966d24bd77b97f474e33ea7765c7bec7796e07"} Feb 04 11:47:13 crc kubenswrapper[4728]: I0204 11:47:13.164464 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"282fd0ba-8410-4d6f-bd7a-8715e0f9f8be","Type":"ContainerStarted","Data":"427cc9a24153f41b13339f7e6d214ee5d59a4f03c04d259f28cc730c70d7acec"} Feb 04 11:47:13 crc kubenswrapper[4728]: I0204 11:47:13.185633 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.185613234 podStartE2EDuration="3.185613234s" podCreationTimestamp="2026-02-04 11:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:47:13.183793521 +0000 UTC m=+1182.326497906" watchObservedRunningTime="2026-02-04 11:47:13.185613234 +0000 UTC m=+1182.328317619" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.707464 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 04 11:47:14 crc 
kubenswrapper[4728]: I0204 11:47:14.709780 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.722425 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.724130 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xjmb5" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.724330 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.724139 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.851442 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config-secret\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.851742 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g42gx\" (UniqueName: \"kubernetes.io/projected/84f425e3-ba15-437d-addf-aa2081f736b5-kube-api-access-g42gx\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.851937 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.852207 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.953859 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config-secret\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.953930 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g42gx\" (UniqueName: \"kubernetes.io/projected/84f425e3-ba15-437d-addf-aa2081f736b5-kube-api-access-g42gx\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.953963 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: 
I0204 11:47:14.954051 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.954871 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.961664 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.966466 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config-secret\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:14 crc kubenswrapper[4728]: I0204 11:47:14.976000 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g42gx\" (UniqueName: \"kubernetes.io/projected/84f425e3-ba15-437d-addf-aa2081f736b5-kube-api-access-g42gx\") pod \"openstackclient\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") " pod="openstack/openstackclient" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.047499 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.380924 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-55db5db6dc-zsx22"] Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.386306 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.389168 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.390145 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.391050 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.392655 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55db5db6dc-zsx22"] Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.463295 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-config-data\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.464031 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-internal-tls-certs\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.464255 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-etc-swift\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.464409 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-log-httpd\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.464646 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-run-httpd\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.465106 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-combined-ca-bundle\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.465260 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-public-tls-certs\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " 
pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.465427 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpsz\" (UniqueName: \"kubernetes.io/projected/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-kube-api-access-pxpsz\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.496897 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.497444 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="proxy-httpd" containerID="cri-o://9b36c470660c3301c9a77e90c6e0210b36528aec3b53563ae6bb0d67b321ef47" gracePeriod=30 Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.497529 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="sg-core" containerID="cri-o://5583b640a86080cac4452dbdf391a1c7944c1b79f306dc12b9f321301af6f94c" gracePeriod=30 Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.497587 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="ceilometer-notification-agent" containerID="cri-o://096ef98d022ff2a5f5d7244179343c64d61422dac7b8a5bb521bf80838aedf97" gracePeriod=30 Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.497889 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="ceilometer-central-agent" containerID="cri-o://22fab38c866c88b6df250f55e01db7d6c7ab6d2ded1ef87a28ad231ded2f26ae" gracePeriod=30 Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.512474 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.522375 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.566677 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-combined-ca-bundle\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.566945 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-public-tls-certs\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.566978 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxpsz\" (UniqueName: \"kubernetes.io/projected/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-kube-api-access-pxpsz\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc 
kubenswrapper[4728]: I0204 11:47:15.567028 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-config-data\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.567069 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-internal-tls-certs\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.567144 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-etc-swift\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.567172 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-log-httpd\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.567219 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-run-httpd\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.568114 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-run-httpd\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.568998 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-log-httpd\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.573978 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-config-data\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.576629 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-internal-tls-certs\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.577595 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-public-tls-certs\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.580070 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-etc-swift\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.582195 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-combined-ca-bundle\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.593456 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxpsz\" (UniqueName: \"kubernetes.io/projected/b0a1bd86-4ac9-4cc5-af9b-447cf553f266-kube-api-access-pxpsz\") pod \"swift-proxy-55db5db6dc-zsx22\" (UID: \"b0a1bd86-4ac9-4cc5-af9b-447cf553f266\") " pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.603949 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.162:3000/\": read tcp 10.217.0.2:60694->10.217.0.162:3000: read: connection reset by peer" Feb 04 11:47:15 crc kubenswrapper[4728]: I0204 11:47:15.707309 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:16 crc kubenswrapper[4728]: I0204 11:47:16.189184 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"84f425e3-ba15-437d-addf-aa2081f736b5","Type":"ContainerStarted","Data":"1b0a7ce7532a9516543838f21e7106fb6aabc3e4451726ce5aed6341b2513aa7"} Feb 04 11:47:16 crc kubenswrapper[4728]: I0204 11:47:16.191950 4728 generic.go:334] "Generic (PLEG): container finished" podID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerID="9b36c470660c3301c9a77e90c6e0210b36528aec3b53563ae6bb0d67b321ef47" exitCode=0 Feb 04 11:47:16 crc kubenswrapper[4728]: I0204 11:47:16.191980 4728 generic.go:334] "Generic (PLEG): container finished" podID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerID="5583b640a86080cac4452dbdf391a1c7944c1b79f306dc12b9f321301af6f94c" exitCode=2 Feb 04 11:47:16 crc kubenswrapper[4728]: I0204 11:47:16.191987 4728 generic.go:334] "Generic (PLEG): container finished" podID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerID="22fab38c866c88b6df250f55e01db7d6c7ab6d2ded1ef87a28ad231ded2f26ae" exitCode=0 Feb 04 11:47:16 crc kubenswrapper[4728]: I0204 11:47:16.191998 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33fb50c4-1e12-45bc-ab0d-efd598f73530","Type":"ContainerDied","Data":"9b36c470660c3301c9a77e90c6e0210b36528aec3b53563ae6bb0d67b321ef47"} Feb 04 11:47:16 crc kubenswrapper[4728]: I0204 11:47:16.192035 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33fb50c4-1e12-45bc-ab0d-efd598f73530","Type":"ContainerDied","Data":"5583b640a86080cac4452dbdf391a1c7944c1b79f306dc12b9f321301af6f94c"} Feb 04 11:47:16 crc kubenswrapper[4728]: I0204 11:47:16.192047 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33fb50c4-1e12-45bc-ab0d-efd598f73530","Type":"ContainerDied","Data":"22fab38c866c88b6df250f55e01db7d6c7ab6d2ded1ef87a28ad231ded2f26ae"} Feb 04 11:47:16 crc kubenswrapper[4728]: I0204 11:47:16.242529 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 04 11:47:16 crc kubenswrapper[4728]: I0204 11:47:16.283488 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55db5db6dc-zsx22"] Feb 04 11:47:17 crc kubenswrapper[4728]: I0204 11:47:17.203718 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55db5db6dc-zsx22" event={"ID":"b0a1bd86-4ac9-4cc5-af9b-447cf553f266","Type":"ContainerStarted","Data":"7eb45871c893582c38f00ba2c17b16df3c2939f607f3080182b4a0413e27d985"} Feb 04 11:47:17 crc kubenswrapper[4728]: I0204 11:47:17.204070 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55db5db6dc-zsx22" event={"ID":"b0a1bd86-4ac9-4cc5-af9b-447cf553f266","Type":"ContainerStarted","Data":"07f4fa62438399c26f8ef0766702dffdd0009fade7b7934c7a8e646a427c6fd8"} Feb 04 11:47:17 crc kubenswrapper[4728]: I0204 11:47:17.204083 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55db5db6dc-zsx22" event={"ID":"b0a1bd86-4ac9-4cc5-af9b-447cf553f266","Type":"ContainerStarted","Data":"0296a31a78044bf1d9079e9a2279cc1fcaea69fea0ff30a8aeb5f688c24a5ef9"} Feb 04 11:47:17 crc kubenswrapper[4728]: I0204 11:47:17.204348 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:17 crc kubenswrapper[4728]: I0204 11:47:17.204497 4728 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55db5db6dc-zsx22" Feb 04 11:47:17 crc kubenswrapper[4728]: I0204 11:47:17.234034 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-55db5db6dc-zsx22" podStartSLOduration=2.234011347 podStartE2EDuration="2.234011347s" podCreationTimestamp="2026-02-04 11:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:47:17.224046589 +0000 UTC m=+1186.366750974" watchObservedRunningTime="2026-02-04 11:47:17.234011347 +0000 UTC m=+1186.376715732" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.295113 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7cd578fb67-gp57x"] Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.296704 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.300958 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.301277 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.301437 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-m2l7x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.314319 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7cd578fb67-gp57x"] Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.412223 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ldgrq"] Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.413793 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.440593 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ldgrq"] Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.456307 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data-custom\") pod \"heat-engine-7cd578fb67-gp57x\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.456439 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data\") pod \"heat-engine-7cd578fb67-gp57x\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.456474 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcn8f\" (UniqueName: \"kubernetes.io/projected/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-kube-api-access-hcn8f\") pod \"heat-engine-7cd578fb67-gp57x\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.456510 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-combined-ca-bundle\") pod \"heat-engine-7cd578fb67-gp57x\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.467097 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-86487d557f-7kzbg"] Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.468799 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.471300 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.483615 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-86487d557f-7kzbg"] Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.508471 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-68d54cf779-mw5kw"] Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.509617 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.512632 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.526667 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-68d54cf779-mw5kw"] Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.557623 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.557680 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data-custom\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.557729 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-config\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.557780 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.557817 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data\") pod \"heat-engine-7cd578fb67-gp57x\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.557848 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcn8f\" (UniqueName: \"kubernetes.io/projected/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-kube-api-access-hcn8f\") pod \"heat-engine-7cd578fb67-gp57x\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.557870 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw86f\" (UniqueName: \"kubernetes.io/projected/4fd56842-4788-44ba-952e-76b8808474a6-kube-api-access-cw86f\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.557899 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-combined-ca-bundle\") pod \"heat-engine-7cd578fb67-gp57x\" (UID: 
\"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.557916 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qzz\" (UniqueName: \"kubernetes.io/projected/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-kube-api-access-n9qzz\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.557956 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-combined-ca-bundle\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.557984 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.558003 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.558032 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data-custom\") pod \"heat-engine-7cd578fb67-gp57x\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.558081 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.565441 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data\") pod \"heat-engine-7cd578fb67-gp57x\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.565736 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-combined-ca-bundle\") pod \"heat-engine-7cd578fb67-gp57x\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.569449 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data-custom\") pod 
\"heat-engine-7cd578fb67-gp57x\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.579612 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcn8f\" (UniqueName: \"kubernetes.io/projected/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-kube-api-access-hcn8f\") pod \"heat-engine-7cd578fb67-gp57x\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.621338 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660003 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660055 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660074 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data-custom\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660112 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-config\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660136 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660206 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-combined-ca-bundle\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660230 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw86f\" (UniqueName: \"kubernetes.io/projected/4fd56842-4788-44ba-952e-76b8808474a6-kube-api-access-cw86f\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660256 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n9qzz\" (UniqueName: \"kubernetes.io/projected/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-kube-api-access-n9qzz\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660304 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlkf9\" (UniqueName: \"kubernetes.io/projected/a07d0233-6135-4630-90b5-3a8c3b7d2a97-kube-api-access-tlkf9\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660323 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-combined-ca-bundle\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660351 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660370 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660386 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.660435 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data-custom\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.662787 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.662893 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.664241 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.665375 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.666637 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.667092 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-config\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.669340 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data-custom\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.681707 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9qzz\" (UniqueName: \"kubernetes.io/projected/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-kube-api-access-n9qzz\") pod \"dnsmasq-dns-7756b9d78c-ldgrq\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.693092 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw86f\" (UniqueName: \"kubernetes.io/projected/4fd56842-4788-44ba-952e-76b8808474a6-kube-api-access-cw86f\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.694008 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-combined-ca-bundle\") pod \"heat-cfnapi-86487d557f-7kzbg\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.745589 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.762540 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlkf9\" (UniqueName: \"kubernetes.io/projected/a07d0233-6135-4630-90b5-3a8c3b7d2a97-kube-api-access-tlkf9\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.762608 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.762641 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data-custom\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.762789 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-combined-ca-bundle\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.766645 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data-custom\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.768277 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.770646 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-combined-ca-bundle\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.782411 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlkf9\" (UniqueName: \"kubernetes.io/projected/a07d0233-6135-4630-90b5-3a8c3b7d2a97-kube-api-access-tlkf9\") pod \"heat-api-68d54cf779-mw5kw\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.791514 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:19 crc kubenswrapper[4728]: I0204 11:47:19.843968 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.249166 4728 generic.go:334] "Generic (PLEG): container finished" podID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerID="096ef98d022ff2a5f5d7244179343c64d61422dac7b8a5bb521bf80838aedf97" exitCode=0 Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.249206 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33fb50c4-1e12-45bc-ab0d-efd598f73530","Type":"ContainerDied","Data":"096ef98d022ff2a5f5d7244179343c64d61422dac7b8a5bb521bf80838aedf97"} Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.284128 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7cd578fb67-gp57x"] Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.433009 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.538486 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ldgrq"] Feb 04 11:47:20 crc kubenswrapper[4728]: W0204 11:47:20.549688 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf06dc94f_182a_4b89_92c4_bb86ea7b4a0b.slice/crio-8f7b371abd2da64ec40fcbe8b98cef76683cd130cd92b40766f1b1da31d1157e WatchSource:0}: Error finding container 8f7b371abd2da64ec40fcbe8b98cef76683cd130cd92b40766f1b1da31d1157e: Status 404 returned error can't find the container with id 8f7b371abd2da64ec40fcbe8b98cef76683cd130cd92b40766f1b1da31d1157e Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.589308 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-scripts\") pod \"33fb50c4-1e12-45bc-ab0d-efd598f73530\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.589451 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-sg-core-conf-yaml\") pod \"33fb50c4-1e12-45bc-ab0d-efd598f73530\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.589496 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-combined-ca-bundle\") pod \"33fb50c4-1e12-45bc-ab0d-efd598f73530\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.589524 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-config-data\") pod \"33fb50c4-1e12-45bc-ab0d-efd598f73530\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.589626 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-log-httpd\") pod \"33fb50c4-1e12-45bc-ab0d-efd598f73530\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.589666 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zbx8p\" (UniqueName: \"kubernetes.io/projected/33fb50c4-1e12-45bc-ab0d-efd598f73530-kube-api-access-zbx8p\") pod \"33fb50c4-1e12-45bc-ab0d-efd598f73530\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.589717 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-run-httpd\") pod \"33fb50c4-1e12-45bc-ab0d-efd598f73530\" (UID: \"33fb50c4-1e12-45bc-ab0d-efd598f73530\") " Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.590881 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "33fb50c4-1e12-45bc-ab0d-efd598f73530" (UID: "33fb50c4-1e12-45bc-ab0d-efd598f73530"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.598409 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-scripts" (OuterVolumeSpecName: "scripts") pod "33fb50c4-1e12-45bc-ab0d-efd598f73530" (UID: "33fb50c4-1e12-45bc-ab0d-efd598f73530"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.598564 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "33fb50c4-1e12-45bc-ab0d-efd598f73530" (UID: "33fb50c4-1e12-45bc-ab0d-efd598f73530"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.608437 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33fb50c4-1e12-45bc-ab0d-efd598f73530-kube-api-access-zbx8p" (OuterVolumeSpecName: "kube-api-access-zbx8p") pod "33fb50c4-1e12-45bc-ab0d-efd598f73530" (UID: "33fb50c4-1e12-45bc-ab0d-efd598f73530"). InnerVolumeSpecName "kube-api-access-zbx8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.670597 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "33fb50c4-1e12-45bc-ab0d-efd598f73530" (UID: "33fb50c4-1e12-45bc-ab0d-efd598f73530"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.695659 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbx8p\" (UniqueName: \"kubernetes.io/projected/33fb50c4-1e12-45bc-ab0d-efd598f73530-kube-api-access-zbx8p\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.695694 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.695703 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.695710 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.695718 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33fb50c4-1e12-45bc-ab0d-efd598f73530-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.712170 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-68d54cf779-mw5kw"] Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.721642 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-86487d557f-7kzbg"] Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.755597 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-config-data" (OuterVolumeSpecName: "config-data") pod "33fb50c4-1e12-45bc-ab0d-efd598f73530" (UID: "33fb50c4-1e12-45bc-ab0d-efd598f73530"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.783187 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33fb50c4-1e12-45bc-ab0d-efd598f73530" (UID: "33fb50c4-1e12-45bc-ab0d-efd598f73530"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.789683 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.798428 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:20 crc kubenswrapper[4728]: I0204 11:47:20.798460 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fb50c4-1e12-45bc-ab0d-efd598f73530-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.274834 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33fb50c4-1e12-45bc-ab0d-efd598f73530","Type":"ContainerDied","Data":"478a2d4269fbf1bde566a160d4a65fc15e6be3997065f2052bc4899fa02903c2"} Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.275150 4728 scope.go:117] "RemoveContainer" containerID="9b36c470660c3301c9a77e90c6e0210b36528aec3b53563ae6bb0d67b321ef47" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.274868 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.283507 4728 generic.go:334] "Generic (PLEG): container finished" podID="f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" containerID="92bd4a3fdd5a184665059b892cfc5ac60136cac2728f3125e1bb419a4348e139" exitCode=0 Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.283568 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" event={"ID":"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b","Type":"ContainerDied","Data":"92bd4a3fdd5a184665059b892cfc5ac60136cac2728f3125e1bb419a4348e139"} Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.283593 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" event={"ID":"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b","Type":"ContainerStarted","Data":"8f7b371abd2da64ec40fcbe8b98cef76683cd130cd92b40766f1b1da31d1157e"} Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.294016 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cd578fb67-gp57x" event={"ID":"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6","Type":"ContainerStarted","Data":"d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059"} Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.294054 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cd578fb67-gp57x" event={"ID":"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6","Type":"ContainerStarted","Data":"5356fead71eb0ba1ec63d96598a5402907695f1379c4014e6952dd3aca5f1d92"} Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.294824 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.299397 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68d54cf779-mw5kw" event={"ID":"a07d0233-6135-4630-90b5-3a8c3b7d2a97","Type":"ContainerStarted","Data":"815cd564e4b69d669ea84e55f2da4be6669d2c9846a79ed70a6b325bc8400f05"} Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.321899 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-86487d557f-7kzbg" event={"ID":"4fd56842-4788-44ba-952e-76b8808474a6","Type":"ContainerStarted","Data":"7b734b22de7edef82592ccf47efacade37d409892f77ba7df26e8f45fd67d4ec"} Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.348394 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.355081 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.368987 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:21 crc kubenswrapper[4728]: E0204 11:47:21.369502 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="ceilometer-notification-agent" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.369597 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="ceilometer-notification-agent" Feb 04 11:47:21 crc kubenswrapper[4728]: E0204 11:47:21.369656 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="proxy-httpd" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.369729 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="proxy-httpd" Feb 04 11:47:21 crc kubenswrapper[4728]: E0204 11:47:21.369945 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="ceilometer-central-agent" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.370020 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="ceilometer-central-agent" Feb 04 11:47:21 crc kubenswrapper[4728]: E0204 11:47:21.370117 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="sg-core" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.370202 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="sg-core" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.370493 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="proxy-httpd" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.370576 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="ceilometer-central-agent" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.370654 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="ceilometer-notification-agent" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.370741 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" containerName="sg-core" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.371829 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7cd578fb67-gp57x" podStartSLOduration=2.371819241 podStartE2EDuration="2.371819241s" podCreationTimestamp="2026-02-04 11:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:47:21.350422578 +0000 UTC 
m=+1190.493126973" watchObservedRunningTime="2026-02-04 11:47:21.371819241 +0000 UTC m=+1190.514523626" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.372670 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.377622 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.378024 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.381695 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.530164 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.530231 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-scripts\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.530322 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-run-httpd\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.530411 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.530448 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jcp4\" (UniqueName: \"kubernetes.io/projected/730b4d31-76aa-48af-b5a5-44d29830cb54-kube-api-access-7jcp4\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.530509 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-config-data\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.530536 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-log-httpd\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.583492 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33fb50c4-1e12-45bc-ab0d-efd598f73530" 
path="/var/lib/kubelet/pods/33fb50c4-1e12-45bc-ab0d-efd598f73530/volumes" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.632389 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-run-httpd\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.632503 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.632543 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jcp4\" (UniqueName: \"kubernetes.io/projected/730b4d31-76aa-48af-b5a5-44d29830cb54-kube-api-access-7jcp4\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.632603 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-config-data\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.632633 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-log-httpd\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.632740 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.632800 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-scripts\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.633476 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-run-httpd\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.636705 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-log-httpd\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.640299 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " 
pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.641696 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.642077 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-config-data\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.651418 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-scripts\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.661252 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jcp4\" (UniqueName: \"kubernetes.io/projected/730b4d31-76aa-48af-b5a5-44d29830cb54-kube-api-access-7jcp4\") pod \"ceilometer-0\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " pod="openstack/ceilometer-0" Feb 04 11:47:21 crc kubenswrapper[4728]: I0204 11:47:21.759611 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:47:24 crc kubenswrapper[4728]: I0204 11:47:24.631905 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.537801 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7bd84699b9-9ldwf"] Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.546818 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.572683 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7bd84699b9-9ldwf"] Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.584503 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-84d95cd6d8-jk9zt"] Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.585989 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.617564 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47ec0af4-1025-4dec-9270-86a8ad62ba47-config-data-custom\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.617602 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data-custom\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.617639 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47ec0af4-1025-4dec-9270-86a8ad62ba47-config-data\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.617655 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgr4z\" (UniqueName: \"kubernetes.io/projected/235ea075-2aaf-4f43-a38a-83f118af4592-kube-api-access-tgr4z\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.617681 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ec0af4-1025-4dec-9270-86a8ad62ba47-combined-ca-bundle\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.617736 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.617773 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-combined-ca-bundle\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.617885 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6t7h\" (UniqueName: \"kubernetes.io/projected/47ec0af4-1025-4dec-9270-86a8ad62ba47-kube-api-access-d6t7h\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.624816 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-647757f45c-fclw2"] Feb 04 11:47:25 crc 
kubenswrapper[4728]: I0204 11:47:25.626727 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-647757f45c-fclw2"
Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.638647 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84d95cd6d8-jk9zt"]
Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.651173 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-647757f45c-fclw2"]
Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.660062 4728 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod2186aabd-28ff-488a-a224-01c14710adac"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod2186aabd-28ff-488a-a224-01c14710adac] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2186aabd_28ff_488a_a224_01c14710adac.slice"
Feb 04 11:47:25 crc kubenswrapper[4728]: E0204 11:47:25.660121 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod2186aabd-28ff-488a-a224-01c14710adac] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod2186aabd-28ff-488a-a224-01c14710adac] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2186aabd_28ff_488a_a224_01c14710adac.slice" pod="openstack/heat-db-sync-jx9lp" podUID="2186aabd-28ff-488a-a224-01c14710adac"
Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.716429 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55db5db6dc-zsx22"
Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.719772 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55db5db6dc-zsx22"
Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.720045 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47ec0af4-1025-4dec-9270-86a8ad62ba47-config-data\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf"
Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.720091 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgr4z\" (UniqueName: \"kubernetes.io/projected/235ea075-2aaf-4f43-a38a-83f118af4592-kube-api-access-tgr4z\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt"
Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.720127 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ec0af4-1025-4dec-9270-86a8ad62ba47-combined-ca-bundle\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf"
Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.720157 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data-custom\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2"
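[editor's note] The "Failed to delete cgroup paths" error above shows how the pod's cgroup path [kubepods besteffort pod2186aabd-28ff-488a-a224-01c14710adac] maps onto the systemd slice name kubepods-besteffort-pod2186aabd_28ff_488a_a224_01c14710adac.slice. A minimal sketch re-deriving that name from the log line itself (this reproduces the naming visible in the log, not kubelet's actual helper function):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName rebuilds the systemd slice leaf name seen in the log:
// path components are joined with "-", dashes inside each component
// (notably the pod UID) are escaped as underscores, and ".slice" is
// appended.
func sliceName(cgroupPath []string) string {
	escaped := make([]string, len(cgroupPath))
	for i, c := range cgroupPath {
		escaped[i] = strings.ReplaceAll(c, "-", "_")
	}
	return strings.Join(escaped, "-") + ".slice"
}

func main() {
	path := []string{"kubepods", "besteffort", "pod2186aabd-28ff-488a-a224-01c14710adac"}
	fmt.Println(sliceName(path))
	// Output: kubepods-besteffort-pod2186aabd_28ff_488a_a224_01c14710adac.slice
}
```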
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-combined-ca-bundle\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.722238 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.722401 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-combined-ca-bundle\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.722436 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.722627 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6t7h\" (UniqueName: \"kubernetes.io/projected/47ec0af4-1025-4dec-9270-86a8ad62ba47-kube-api-access-d6t7h\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.722726 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj2hc\" (UniqueName: \"kubernetes.io/projected/4a48c806-c596-4c79-8b6a-123a94b9f557-kube-api-access-lj2hc\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.722779 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47ec0af4-1025-4dec-9270-86a8ad62ba47-config-data-custom\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.722807 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data-custom\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.729333 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47ec0af4-1025-4dec-9270-86a8ad62ba47-config-data\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.729354 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47ec0af4-1025-4dec-9270-86a8ad62ba47-config-data-custom\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.733917 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.733943 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data-custom\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.737545 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-combined-ca-bundle\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.743590 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgr4z\" (UniqueName: \"kubernetes.io/projected/235ea075-2aaf-4f43-a38a-83f118af4592-kube-api-access-tgr4z\") pod \"heat-cfnapi-84d95cd6d8-jk9zt\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.744585 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6t7h\" (UniqueName: \"kubernetes.io/projected/47ec0af4-1025-4dec-9270-86a8ad62ba47-kube-api-access-d6t7h\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.747859 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ec0af4-1025-4dec-9270-86a8ad62ba47-combined-ca-bundle\") pod \"heat-engine-7bd84699b9-9ldwf\" (UID: \"47ec0af4-1025-4dec-9270-86a8ad62ba47\") " pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.827839 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj2hc\" (UniqueName: \"kubernetes.io/projected/4a48c806-c596-4c79-8b6a-123a94b9f557-kube-api-access-lj2hc\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.827917 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data-custom\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.827961 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-combined-ca-bundle\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.828001 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.840422 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.846306 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj2hc\" (UniqueName: \"kubernetes.io/projected/4a48c806-c596-4c79-8b6a-123a94b9f557-kube-api-access-lj2hc\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.860607 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data-custom\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.877013 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.880999 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-combined-ca-bundle\") pod \"heat-api-647757f45c-fclw2\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.922214 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:25 crc kubenswrapper[4728]: I0204 11:47:25.955384 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:26 crc kubenswrapper[4728]: I0204 11:47:26.371070 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jx9lp" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.012099 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-68d54cf779-mw5kw"] Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.035151 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-86487d557f-7kzbg"] Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.053242 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6bf54fd9cd-l9msv"] Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.054476 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.057729 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.058027 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.073465 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-546d7984c6-n6fdl"] Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.075052 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.082066 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.082294 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.096844 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-546d7984c6-n6fdl"] Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.103508 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bf54fd9cd-l9msv"] Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252131 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-combined-ca-bundle\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252207 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvs2g\" (UniqueName: \"kubernetes.io/projected/5a350ee5-d239-42fc-9665-b07c506eb400-kube-api-access-pvs2g\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252263 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-public-tls-certs\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252295 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-config-data\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252503 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-config-data\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252677 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-internal-tls-certs\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252717 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xldjl\" (UniqueName: \"kubernetes.io/projected/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-kube-api-access-xldjl\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252740 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-internal-tls-certs\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252831 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-config-data-custom\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252859 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-public-tls-certs\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252885 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-combined-ca-bundle\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.252979 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-config-data-custom\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.353923 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-config-data\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.353990 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-internal-tls-certs\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 
11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.354008 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xldjl\" (UniqueName: \"kubernetes.io/projected/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-kube-api-access-xldjl\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.354028 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-internal-tls-certs\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.354049 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-config-data-custom\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.354064 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-public-tls-certs\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.354080 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-combined-ca-bundle\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.354106 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-config-data-custom\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.354133 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-combined-ca-bundle\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.354151 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvs2g\" (UniqueName: \"kubernetes.io/projected/5a350ee5-d239-42fc-9665-b07c506eb400-kube-api-access-pvs2g\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.354193 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-public-tls-certs\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc 
kubenswrapper[4728]: I0204 11:47:27.354224 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-config-data\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.362875 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-internal-tls-certs\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.363047 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-combined-ca-bundle\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.363550 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-public-tls-certs\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.363689 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-config-data\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.364201 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-config-data\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.364248 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-public-tls-certs\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.365688 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a350ee5-d239-42fc-9665-b07c506eb400-config-data-custom\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.366521 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-config-data-custom\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.367936 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-internal-tls-certs\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.374119 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-combined-ca-bundle\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.375037 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xldjl\" (UniqueName: \"kubernetes.io/projected/eec7f6cd-431d-4a8f-8850-27aeb6a18f37-kube-api-access-xldjl\") pod \"heat-api-546d7984c6-n6fdl\" (UID: \"eec7f6cd-431d-4a8f-8850-27aeb6a18f37\") " pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.377505 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvs2g\" (UniqueName: \"kubernetes.io/projected/5a350ee5-d239-42fc-9665-b07c506eb400-kube-api-access-pvs2g\") pod \"heat-cfnapi-6bf54fd9cd-l9msv\" (UID: \"5a350ee5-d239-42fc-9665-b07c506eb400\") " pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.379276 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:27 crc kubenswrapper[4728]: I0204 11:47:27.413020 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:28 crc kubenswrapper[4728]: I0204 11:47:28.238716 4728 scope.go:117] "RemoveContainer" containerID="5583b640a86080cac4452dbdf391a1c7944c1b79f306dc12b9f321301af6f94c" Feb 04 11:47:28 crc kubenswrapper[4728]: I0204 11:47:28.758431 4728 scope.go:117] "RemoveContainer" containerID="096ef98d022ff2a5f5d7244179343c64d61422dac7b8a5bb521bf80838aedf97" Feb 04 11:47:28 crc kubenswrapper[4728]: I0204 11:47:28.873925 4728 scope.go:117] "RemoveContainer" containerID="22fab38c866c88b6df250f55e01db7d6c7ab6d2ded1ef87a28ad231ded2f26ae" Feb 04 11:47:28 crc kubenswrapper[4728]: I0204 11:47:28.898071 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:29 crc kubenswrapper[4728]: W0204 11:47:29.068906 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a350ee5_d239_42fc_9665_b07c506eb400.slice/crio-b15d5d168be42972513e01a79c1d7d5f5269ee0336a28488c50d3801f0dfb7e6 WatchSource:0}: Error finding container b15d5d168be42972513e01a79c1d7d5f5269ee0336a28488c50d3801f0dfb7e6: Status 404 returned error can't find the container with id b15d5d168be42972513e01a79c1d7d5f5269ee0336a28488c50d3801f0dfb7e6 Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.069869 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bf54fd9cd-l9msv"] Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.233977 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-546d7984c6-n6fdl"] Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.248668 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7bd84699b9-9ldwf"] Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.255834 4728 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/heat-api-647757f45c-fclw2"] Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.437325 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84d95cd6d8-jk9zt"] Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.577870 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.577910 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"84f425e3-ba15-437d-addf-aa2081f736b5","Type":"ContainerStarted","Data":"3326601329e1332a3e753bd36ab49dd44612e85eea1e63841ba6770f31db3a67"} Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.577966 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" event={"ID":"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b","Type":"ContainerStarted","Data":"c3a7591d2eb9c34a5c88911faf670445074d147c974108fdbcc248d04dfc58d9"} Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.577994 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"730b4d31-76aa-48af-b5a5-44d29830cb54","Type":"ContainerStarted","Data":"c6a9be9ee24b761f9027f96a3e3c9d52ff5877727577357944309272aa8cf7ec"} Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.578380 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.694726426 podStartE2EDuration="15.577891812s" podCreationTimestamp="2026-02-04 11:47:14 +0000 UTC" firstStartedPulling="2026-02-04 11:47:15.524474509 +0000 UTC m=+1184.667178894" lastFinishedPulling="2026-02-04 11:47:28.407639895 +0000 UTC m=+1197.550344280" observedRunningTime="2026-02-04 11:47:29.573654901 +0000 UTC m=+1198.716359306" watchObservedRunningTime="2026-02-04 11:47:29.577891812 +0000 UTC m=+1198.720596197" Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.580883 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" event={"ID":"5a350ee5-d239-42fc-9665-b07c506eb400","Type":"ContainerStarted","Data":"b15d5d168be42972513e01a79c1d7d5f5269ee0336a28488c50d3801f0dfb7e6"} Feb 04 11:47:29 crc kubenswrapper[4728]: I0204 11:47:29.609497 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" podStartSLOduration=10.609482497 podStartE2EDuration="10.609482497s" podCreationTimestamp="2026-02-04 11:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:47:29.603583966 +0000 UTC m=+1198.746288351" watchObservedRunningTime="2026-02-04 11:47:29.609482497 +0000 UTC m=+1198.752186882" Feb 04 11:47:30 crc kubenswrapper[4728]: W0204 11:47:30.208564 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47ec0af4_1025_4dec_9270_86a8ad62ba47.slice/crio-c9e3915974bb597e82aa3e5a5d8f28047618f9d911949b9031306b47957056ba WatchSource:0}: Error finding container c9e3915974bb597e82aa3e5a5d8f28047618f9d911949b9031306b47957056ba: Status 404 returned error can't find the container with id c9e3915974bb597e82aa3e5a5d8f28047618f9d911949b9031306b47957056ba Feb 04 11:47:30 crc kubenswrapper[4728]: I0204 11:47:30.328308 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84679c4c57-hc428" 
Feb 04 11:47:30 crc kubenswrapper[4728]: I0204 11:47:30.399329 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cd5549f4d-zhk8w"] Feb 04 11:47:30 crc kubenswrapper[4728]: I0204 11:47:30.399609 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cd5549f4d-zhk8w" podUID="beab3e33-962c-46f9-ac60-a8a739d86cac" containerName="neutron-api" containerID="cri-o://1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6" gracePeriod=30 Feb 04 11:47:30 crc kubenswrapper[4728]: I0204 11:47:30.399757 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cd5549f4d-zhk8w" podUID="beab3e33-962c-46f9-ac60-a8a739d86cac" containerName="neutron-httpd" containerID="cri-o://44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256" gracePeriod=30 Feb 04 11:47:30 crc kubenswrapper[4728]: I0204 11:47:30.637577 4728 generic.go:334] "Generic (PLEG): container finished" podID="beab3e33-962c-46f9-ac60-a8a739d86cac" containerID="44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256" exitCode=0 Feb 04 11:47:30 crc kubenswrapper[4728]: I0204 11:47:30.638812 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cd5549f4d-zhk8w" event={"ID":"beab3e33-962c-46f9-ac60-a8a739d86cac","Type":"ContainerDied","Data":"44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256"} Feb 04 11:47:30 crc kubenswrapper[4728]: I0204 11:47:30.641504 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7bd84699b9-9ldwf" event={"ID":"47ec0af4-1025-4dec-9270-86a8ad62ba47","Type":"ContainerStarted","Data":"c9e3915974bb597e82aa3e5a5d8f28047618f9d911949b9031306b47957056ba"} Feb 04 11:47:30 crc kubenswrapper[4728]: I0204 11:47:30.642501 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" event={"ID":"235ea075-2aaf-4f43-a38a-83f118af4592","Type":"ContainerStarted","Data":"880d99eb3f4cfd052969b74d8e14eb8cad46b1091338653696099a5c23f29b5a"} Feb 04 11:47:30 crc kubenswrapper[4728]: I0204 11:47:30.643889 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647757f45c-fclw2" event={"ID":"4a48c806-c596-4c79-8b6a-123a94b9f557","Type":"ContainerStarted","Data":"0a04fb87e03d0454a7581b7bbb099948eafa6f482417998c354ecfc713625624"} Feb 04 11:47:30 crc kubenswrapper[4728]: I0204 11:47:30.645320 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-546d7984c6-n6fdl" event={"ID":"eec7f6cd-431d-4a8f-8850-27aeb6a18f37","Type":"ContainerStarted","Data":"64d5c0f8ed4f30bebe22059665b5ec1acf288b856913b93d9cbe8900f0bda302"} Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.693950 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-546d7984c6-n6fdl" event={"ID":"eec7f6cd-431d-4a8f-8850-27aeb6a18f37","Type":"ContainerStarted","Data":"063bcf788dda5baa329fb3344a7d70617ab7d259fc7c12b447525387a9fb9846"} Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.695237 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.702418 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"730b4d31-76aa-48af-b5a5-44d29830cb54","Type":"ContainerStarted","Data":"0b1a450e517be9022aaf55ecb7221bd13c8dba499d9aed422c40f42121a1612c"} Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.704997 4728 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-68d54cf779-mw5kw" podUID="a07d0233-6135-4630-90b5-3a8c3b7d2a97" containerName="heat-api" containerID="cri-o://01cca5f9fd67a4a20c7bb8dd603211aa5f56a4ae3ebaef5da0b100e5a9560f7d" gracePeriod=60 Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.705682 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68d54cf779-mw5kw" event={"ID":"a07d0233-6135-4630-90b5-3a8c3b7d2a97","Type":"ContainerStarted","Data":"01cca5f9fd67a4a20c7bb8dd603211aa5f56a4ae3ebaef5da0b100e5a9560f7d"} Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.705962 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.723923 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7bd84699b9-9ldwf" event={"ID":"47ec0af4-1025-4dec-9270-86a8ad62ba47","Type":"ContainerStarted","Data":"962d2b65df342c474c217850370c663cb8c07683406e5c87f24f3b7b68853ec0"} Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.724448 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.729128 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" event={"ID":"5a350ee5-d239-42fc-9665-b07c506eb400","Type":"ContainerStarted","Data":"6a0503479643bb1a899c3ee5ecceeb9825e2ec5b17e9d67e3f63a01d0a48c36c"} Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.730171 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.743304 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-86487d557f-7kzbg" podUID="4fd56842-4788-44ba-952e-76b8808474a6" containerName="heat-cfnapi" containerID="cri-o://9d521f00d9f66c3697767f5fda94c81721ff60ef59a102b3072eefab17f956c9" gracePeriod=60 Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.743588 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86487d557f-7kzbg" event={"ID":"4fd56842-4788-44ba-952e-76b8808474a6","Type":"ContainerStarted","Data":"9d521f00d9f66c3697767f5fda94c81721ff60ef59a102b3072eefab17f956c9"} Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.743645 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.750423 4728 generic.go:334] "Generic (PLEG): container finished" podID="235ea075-2aaf-4f43-a38a-83f118af4592" containerID="6e7b7ffc7298fc63e4e1c56257ec9287b496c7bdb0d1dd3848fa9c571d1fa51b" exitCode=1 Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.750513 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" event={"ID":"235ea075-2aaf-4f43-a38a-83f118af4592","Type":"ContainerDied","Data":"6e7b7ffc7298fc63e4e1c56257ec9287b496c7bdb0d1dd3848fa9c571d1fa51b"} Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.751871 4728 scope.go:117] "RemoveContainer" containerID="6e7b7ffc7298fc63e4e1c56257ec9287b496c7bdb0d1dd3848fa9c571d1fa51b" Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.755214 4728 generic.go:334] "Generic (PLEG): container finished" podID="4a48c806-c596-4c79-8b6a-123a94b9f557" 
containerID="432060e443e891774fff1fbbce10aebc696b770ad813827b9337e48640d7a854" exitCode=1 Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.755611 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647757f45c-fclw2" event={"ID":"4a48c806-c596-4c79-8b6a-123a94b9f557","Type":"ContainerDied","Data":"432060e443e891774fff1fbbce10aebc696b770ad813827b9337e48640d7a854"} Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.756566 4728 scope.go:117] "RemoveContainer" containerID="432060e443e891774fff1fbbce10aebc696b770ad813827b9337e48640d7a854" Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.781275 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7bd84699b9-9ldwf" podStartSLOduration=6.781258974 podStartE2EDuration="6.781258974s" podCreationTimestamp="2026-02-04 11:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:47:31.780057325 +0000 UTC m=+1200.922761710" watchObservedRunningTime="2026-02-04 11:47:31.781258974 +0000 UTC m=+1200.923963349" Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.811182 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" podStartSLOduration=3.3774847709999998 podStartE2EDuration="4.81116304s" podCreationTimestamp="2026-02-04 11:47:27 +0000 UTC" firstStartedPulling="2026-02-04 11:47:29.071596989 +0000 UTC m=+1198.214301364" lastFinishedPulling="2026-02-04 11:47:30.505275258 +0000 UTC m=+1199.647979633" observedRunningTime="2026-02-04 11:47:31.803486256 +0000 UTC m=+1200.946190641" watchObservedRunningTime="2026-02-04 11:47:31.81116304 +0000 UTC m=+1200.953867425" Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.832370 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-86487d557f-7kzbg" podStartSLOduration=3.059896271 podStartE2EDuration="12.832353387s" podCreationTimestamp="2026-02-04 11:47:19 +0000 UTC" firstStartedPulling="2026-02-04 11:47:20.774549821 +0000 UTC m=+1189.917254206" lastFinishedPulling="2026-02-04 11:47:30.547006937 +0000 UTC m=+1199.689711322" observedRunningTime="2026-02-04 11:47:31.822364638 +0000 UTC m=+1200.965069043" watchObservedRunningTime="2026-02-04 11:47:31.832353387 +0000 UTC m=+1200.975057772" Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.846353 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-546d7984c6-n6fdl" podStartSLOduration=4.084795862 podStartE2EDuration="4.846337722s" podCreationTimestamp="2026-02-04 11:47:27 +0000 UTC" firstStartedPulling="2026-02-04 11:47:29.693556078 +0000 UTC m=+1198.836260503" lastFinishedPulling="2026-02-04 11:47:30.455097978 +0000 UTC m=+1199.597802363" observedRunningTime="2026-02-04 11:47:31.841103476 +0000 UTC m=+1200.983807851" watchObservedRunningTime="2026-02-04 11:47:31.846337722 +0000 UTC m=+1200.989042107" Feb 04 11:47:31 crc kubenswrapper[4728]: I0204 11:47:31.874100 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-68d54cf779-mw5kw" podStartSLOduration=3.339729898 podStartE2EDuration="12.874075986s" podCreationTimestamp="2026-02-04 11:47:19 +0000 UTC" firstStartedPulling="2026-02-04 11:47:20.759900031 +0000 UTC m=+1189.902604416" lastFinishedPulling="2026-02-04 11:47:30.294246119 +0000 UTC m=+1199.436950504" observedRunningTime="2026-02-04 11:47:31.858085542 +0000 UTC 
m=+1201.000789927" watchObservedRunningTime="2026-02-04 11:47:31.874075986 +0000 UTC m=+1201.016780371" Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.773783 4728 generic.go:334] "Generic (PLEG): container finished" podID="a07d0233-6135-4630-90b5-3a8c3b7d2a97" containerID="01cca5f9fd67a4a20c7bb8dd603211aa5f56a4ae3ebaef5da0b100e5a9560f7d" exitCode=0 Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.774130 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68d54cf779-mw5kw" event={"ID":"a07d0233-6135-4630-90b5-3a8c3b7d2a97","Type":"ContainerDied","Data":"01cca5f9fd67a4a20c7bb8dd603211aa5f56a4ae3ebaef5da0b100e5a9560f7d"} Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.781425 4728 generic.go:334] "Generic (PLEG): container finished" podID="4fd56842-4788-44ba-952e-76b8808474a6" containerID="9d521f00d9f66c3697767f5fda94c81721ff60ef59a102b3072eefab17f956c9" exitCode=0 Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.781499 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86487d557f-7kzbg" event={"ID":"4fd56842-4788-44ba-952e-76b8808474a6","Type":"ContainerDied","Data":"9d521f00d9f66c3697767f5fda94c81721ff60ef59a102b3072eefab17f956c9"} Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.781525 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-86487d557f-7kzbg" event={"ID":"4fd56842-4788-44ba-952e-76b8808474a6","Type":"ContainerDied","Data":"7b734b22de7edef82592ccf47efacade37d409892f77ba7df26e8f45fd67d4ec"} Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.781536 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b734b22de7edef82592ccf47efacade37d409892f77ba7df26e8f45fd67d4ec" Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.788985 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" event={"ID":"235ea075-2aaf-4f43-a38a-83f118af4592","Type":"ContainerStarted","Data":"afede4fbbe825fcb9ded527ea707226a216f6002726bd09fabb83f9aee2acbbe"} Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.789050 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.793939 4728 generic.go:334] "Generic (PLEG): container finished" podID="4a48c806-c596-4c79-8b6a-123a94b9f557" containerID="71266a841fafafd6783c0c85ed0134dc2a42352ca5c047f1995c8b727e42c9c9" exitCode=1 Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.794089 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647757f45c-fclw2" event={"ID":"4a48c806-c596-4c79-8b6a-123a94b9f557","Type":"ContainerDied","Data":"71266a841fafafd6783c0c85ed0134dc2a42352ca5c047f1995c8b727e42c9c9"} Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.795850 4728 scope.go:117] "RemoveContainer" containerID="432060e443e891774fff1fbbce10aebc696b770ad813827b9337e48640d7a854" Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.796447 4728 scope.go:117] "RemoveContainer" containerID="71266a841fafafd6783c0c85ed0134dc2a42352ca5c047f1995c8b727e42c9c9" Feb 04 11:47:32 crc kubenswrapper[4728]: E0204 11:47:32.796656 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-647757f45c-fclw2_openstack(4a48c806-c596-4c79-8b6a-123a94b9f557)\"" pod="openstack/heat-api-647757f45c-fclw2" 
podUID="4a48c806-c596-4c79-8b6a-123a94b9f557" Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.796856 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"730b4d31-76aa-48af-b5a5-44d29830cb54","Type":"ContainerStarted","Data":"b406825e6ac62fa8d801834249b7082f78ee826af7175056821582891aec862e"} Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.873724 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" podStartSLOduration=7.258078322 podStartE2EDuration="7.873692239s" podCreationTimestamp="2026-02-04 11:47:25 +0000 UTC" firstStartedPulling="2026-02-04 11:47:29.695543387 +0000 UTC m=+1198.838247772" lastFinishedPulling="2026-02-04 11:47:30.311157304 +0000 UTC m=+1199.453861689" observedRunningTime="2026-02-04 11:47:32.812150057 +0000 UTC m=+1201.954854442" watchObservedRunningTime="2026-02-04 11:47:32.873692239 +0000 UTC m=+1202.016396624" Feb 04 11:47:32 crc kubenswrapper[4728]: I0204 11:47:32.991328 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.024906 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.029627 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data\") pod \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.029691 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-combined-ca-bundle\") pod \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.029774 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlkf9\" (UniqueName: \"kubernetes.io/projected/a07d0233-6135-4630-90b5-3a8c3b7d2a97-kube-api-access-tlkf9\") pod \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.029818 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-combined-ca-bundle\") pod \"4fd56842-4788-44ba-952e-76b8808474a6\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.029882 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw86f\" (UniqueName: \"kubernetes.io/projected/4fd56842-4788-44ba-952e-76b8808474a6-kube-api-access-cw86f\") pod \"4fd56842-4788-44ba-952e-76b8808474a6\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.029908 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data-custom\") pod \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\" (UID: \"a07d0233-6135-4630-90b5-3a8c3b7d2a97\") " Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 
11:47:33.029936 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data\") pod \"4fd56842-4788-44ba-952e-76b8808474a6\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.029965 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data-custom\") pod \"4fd56842-4788-44ba-952e-76b8808474a6\" (UID: \"4fd56842-4788-44ba-952e-76b8808474a6\") " Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.046171 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a07d0233-6135-4630-90b5-3a8c3b7d2a97" (UID: "a07d0233-6135-4630-90b5-3a8c3b7d2a97"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.047909 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4fd56842-4788-44ba-952e-76b8808474a6" (UID: "4fd56842-4788-44ba-952e-76b8808474a6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.060447 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd56842-4788-44ba-952e-76b8808474a6-kube-api-access-cw86f" (OuterVolumeSpecName: "kube-api-access-cw86f") pod "4fd56842-4788-44ba-952e-76b8808474a6" (UID: "4fd56842-4788-44ba-952e-76b8808474a6"). InnerVolumeSpecName "kube-api-access-cw86f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.070120 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a07d0233-6135-4630-90b5-3a8c3b7d2a97-kube-api-access-tlkf9" (OuterVolumeSpecName: "kube-api-access-tlkf9") pod "a07d0233-6135-4630-90b5-3a8c3b7d2a97" (UID: "a07d0233-6135-4630-90b5-3a8c3b7d2a97"). InnerVolumeSpecName "kube-api-access-tlkf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.109897 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a07d0233-6135-4630-90b5-3a8c3b7d2a97" (UID: "a07d0233-6135-4630-90b5-3a8c3b7d2a97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.126447 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fd56842-4788-44ba-952e-76b8808474a6" (UID: "4fd56842-4788-44ba-952e-76b8808474a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.131934 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.131972 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlkf9\" (UniqueName: \"kubernetes.io/projected/a07d0233-6135-4630-90b5-3a8c3b7d2a97-kube-api-access-tlkf9\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.131986 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.131996 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw86f\" (UniqueName: \"kubernetes.io/projected/4fd56842-4788-44ba-952e-76b8808474a6-kube-api-access-cw86f\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.132008 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.132021 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.171111 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data" (OuterVolumeSpecName: "config-data") pod "4fd56842-4788-44ba-952e-76b8808474a6" (UID: "4fd56842-4788-44ba-952e-76b8808474a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.211187 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data" (OuterVolumeSpecName: "config-data") pod "a07d0233-6135-4630-90b5-3a8c3b7d2a97" (UID: "a07d0233-6135-4630-90b5-3a8c3b7d2a97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.239280 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07d0233-6135-4630-90b5-3a8c3b7d2a97-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.239331 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd56842-4788-44ba-952e-76b8808474a6-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.548298 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-mz5w2"] Feb 04 11:47:33 crc kubenswrapper[4728]: E0204 11:47:33.549035 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a07d0233-6135-4630-90b5-3a8c3b7d2a97" containerName="heat-api" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.549058 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a07d0233-6135-4630-90b5-3a8c3b7d2a97" containerName="heat-api" Feb 04 11:47:33 crc kubenswrapper[4728]: E0204 11:47:33.549079 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd56842-4788-44ba-952e-76b8808474a6" containerName="heat-cfnapi" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.549089 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd56842-4788-44ba-952e-76b8808474a6" containerName="heat-cfnapi" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.549362 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd56842-4788-44ba-952e-76b8808474a6" containerName="heat-cfnapi" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.549387 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a07d0233-6135-4630-90b5-3a8c3b7d2a97" containerName="heat-api" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.550158 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mz5w2" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.563642 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mz5w2"] Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.645906 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wggnf"] Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.647003 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wggnf" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.650809 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-operator-scripts\") pod \"nova-api-db-create-mz5w2\" (UID: \"8ffe9e93-b683-4326-b0dd-ec6eb798ab50\") " pod="openstack/nova-api-db-create-mz5w2" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.651111 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpqv6\" (UniqueName: \"kubernetes.io/projected/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-kube-api-access-cpqv6\") pod \"nova-api-db-create-mz5w2\" (UID: \"8ffe9e93-b683-4326-b0dd-ec6eb798ab50\") " pod="openstack/nova-api-db-create-mz5w2" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.663081 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wggnf"] Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.752223 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ab4b-account-create-update-mk5gq"] Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.753011 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99jf\" (UniqueName: \"kubernetes.io/projected/e23d5efb-8f3f-40cf-992b-00aa2416f23b-kube-api-access-v99jf\") pod \"nova-cell0-db-create-wggnf\" (UID: \"e23d5efb-8f3f-40cf-992b-00aa2416f23b\") " pod="openstack/nova-cell0-db-create-wggnf" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.753061 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpqv6\" (UniqueName: \"kubernetes.io/projected/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-kube-api-access-cpqv6\") pod \"nova-api-db-create-mz5w2\" (UID: \"8ffe9e93-b683-4326-b0dd-ec6eb798ab50\") " pod="openstack/nova-api-db-create-mz5w2" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.753139 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-operator-scripts\") pod \"nova-api-db-create-mz5w2\" (UID: \"8ffe9e93-b683-4326-b0dd-ec6eb798ab50\") " pod="openstack/nova-api-db-create-mz5w2" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.753172 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e23d5efb-8f3f-40cf-992b-00aa2416f23b-operator-scripts\") pod \"nova-cell0-db-create-wggnf\" (UID: \"e23d5efb-8f3f-40cf-992b-00aa2416f23b\") " pod="openstack/nova-cell0-db-create-wggnf" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.753475 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ab4b-account-create-update-mk5gq" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.754623 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-operator-scripts\") pod \"nova-api-db-create-mz5w2\" (UID: \"8ffe9e93-b683-4326-b0dd-ec6eb798ab50\") " pod="openstack/nova-api-db-create-mz5w2" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.756318 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.764312 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-v7lw4"] Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.766084 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-v7lw4" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.777182 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ab4b-account-create-update-mk5gq"] Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.790437 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpqv6\" (UniqueName: \"kubernetes.io/projected/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-kube-api-access-cpqv6\") pod \"nova-api-db-create-mz5w2\" (UID: \"8ffe9e93-b683-4326-b0dd-ec6eb798ab50\") " pod="openstack/nova-api-db-create-mz5w2" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.799727 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-v7lw4"] Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.809176 4728 generic.go:334] "Generic (PLEG): container finished" podID="235ea075-2aaf-4f43-a38a-83f118af4592" containerID="afede4fbbe825fcb9ded527ea707226a216f6002726bd09fabb83f9aee2acbbe" exitCode=1 Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.809303 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" event={"ID":"235ea075-2aaf-4f43-a38a-83f118af4592","Type":"ContainerDied","Data":"afede4fbbe825fcb9ded527ea707226a216f6002726bd09fabb83f9aee2acbbe"} Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.809389 4728 scope.go:117] "RemoveContainer" containerID="6e7b7ffc7298fc63e4e1c56257ec9287b496c7bdb0d1dd3848fa9c571d1fa51b" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.810137 4728 scope.go:117] "RemoveContainer" containerID="afede4fbbe825fcb9ded527ea707226a216f6002726bd09fabb83f9aee2acbbe" Feb 04 11:47:33 crc kubenswrapper[4728]: E0204 11:47:33.810406 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-84d95cd6d8-jk9zt_openstack(235ea075-2aaf-4f43-a38a-83f118af4592)\"" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" podUID="235ea075-2aaf-4f43-a38a-83f118af4592" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.827286 4728 scope.go:117] "RemoveContainer" containerID="71266a841fafafd6783c0c85ed0134dc2a42352ca5c047f1995c8b727e42c9c9" Feb 04 11:47:33 crc kubenswrapper[4728]: E0204 11:47:33.827693 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-647757f45c-fclw2_openstack(4a48c806-c596-4c79-8b6a-123a94b9f557)\"" 
pod="openstack/heat-api-647757f45c-fclw2" podUID="4a48c806-c596-4c79-8b6a-123a94b9f557" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.845215 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"730b4d31-76aa-48af-b5a5-44d29830cb54","Type":"ContainerStarted","Data":"8b0149ad79811bcc633bf91f6c7bd99c94b3b9ecd8b7d68032875501fcdff921"} Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.856197 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c91a14-9080-4840-bc0e-9e6b103d9d01-operator-scripts\") pod \"nova-api-ab4b-account-create-update-mk5gq\" (UID: \"92c91a14-9080-4840-bc0e-9e6b103d9d01\") " pod="openstack/nova-api-ab4b-account-create-update-mk5gq" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.856412 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e23d5efb-8f3f-40cf-992b-00aa2416f23b-operator-scripts\") pod \"nova-cell0-db-create-wggnf\" (UID: \"e23d5efb-8f3f-40cf-992b-00aa2416f23b\") " pod="openstack/nova-cell0-db-create-wggnf" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.856504 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6bf70d2-1257-434e-9597-b4c98e4bb63b-operator-scripts\") pod \"nova-cell1-db-create-v7lw4\" (UID: \"e6bf70d2-1257-434e-9597-b4c98e4bb63b\") " pod="openstack/nova-cell1-db-create-v7lw4" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.856567 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxcmp\" (UniqueName: \"kubernetes.io/projected/e6bf70d2-1257-434e-9597-b4c98e4bb63b-kube-api-access-lxcmp\") pod \"nova-cell1-db-create-v7lw4\" (UID: \"e6bf70d2-1257-434e-9597-b4c98e4bb63b\") " pod="openstack/nova-cell1-db-create-v7lw4" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.856640 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v99jf\" (UniqueName: \"kubernetes.io/projected/e23d5efb-8f3f-40cf-992b-00aa2416f23b-kube-api-access-v99jf\") pod \"nova-cell0-db-create-wggnf\" (UID: \"e23d5efb-8f3f-40cf-992b-00aa2416f23b\") " pod="openstack/nova-cell0-db-create-wggnf" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.856679 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln66j\" (UniqueName: \"kubernetes.io/projected/92c91a14-9080-4840-bc0e-9e6b103d9d01-kube-api-access-ln66j\") pod \"nova-api-ab4b-account-create-update-mk5gq\" (UID: \"92c91a14-9080-4840-bc0e-9e6b103d9d01\") " pod="openstack/nova-api-ab4b-account-create-update-mk5gq" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.859798 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e23d5efb-8f3f-40cf-992b-00aa2416f23b-operator-scripts\") pod \"nova-cell0-db-create-wggnf\" (UID: \"e23d5efb-8f3f-40cf-992b-00aa2416f23b\") " pod="openstack/nova-cell0-db-create-wggnf" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.864387 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-86487d557f-7kzbg" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.865538 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-68d54cf779-mw5kw" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.866021 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68d54cf779-mw5kw" event={"ID":"a07d0233-6135-4630-90b5-3a8c3b7d2a97","Type":"ContainerDied","Data":"815cd564e4b69d669ea84e55f2da4be6669d2c9846a79ed70a6b325bc8400f05"} Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.876529 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v99jf\" (UniqueName: \"kubernetes.io/projected/e23d5efb-8f3f-40cf-992b-00aa2416f23b-kube-api-access-v99jf\") pod \"nova-cell0-db-create-wggnf\" (UID: \"e23d5efb-8f3f-40cf-992b-00aa2416f23b\") " pod="openstack/nova-cell0-db-create-wggnf" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.881652 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mz5w2" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.948866 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7e37-account-create-update-svvcv"] Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.950181 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7e37-account-create-update-svvcv" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.957100 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.957841 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ed46b-54ee-4fe9-adca-5986b1befc95-operator-scripts\") pod \"nova-cell0-7e37-account-create-update-svvcv\" (UID: \"c63ed46b-54ee-4fe9-adca-5986b1befc95\") " pod="openstack/nova-cell0-7e37-account-create-update-svvcv" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.957890 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6bf70d2-1257-434e-9597-b4c98e4bb63b-operator-scripts\") pod \"nova-cell1-db-create-v7lw4\" (UID: \"e6bf70d2-1257-434e-9597-b4c98e4bb63b\") " pod="openstack/nova-cell1-db-create-v7lw4" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.957913 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfzzz\" (UniqueName: \"kubernetes.io/projected/c63ed46b-54ee-4fe9-adca-5986b1befc95-kube-api-access-qfzzz\") pod \"nova-cell0-7e37-account-create-update-svvcv\" (UID: \"c63ed46b-54ee-4fe9-adca-5986b1befc95\") " pod="openstack/nova-cell0-7e37-account-create-update-svvcv" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.957949 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxcmp\" (UniqueName: \"kubernetes.io/projected/e6bf70d2-1257-434e-9597-b4c98e4bb63b-kube-api-access-lxcmp\") pod \"nova-cell1-db-create-v7lw4\" (UID: \"e6bf70d2-1257-434e-9597-b4c98e4bb63b\") " pod="openstack/nova-cell1-db-create-v7lw4" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.958029 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln66j\" (UniqueName: 
\"kubernetes.io/projected/92c91a14-9080-4840-bc0e-9e6b103d9d01-kube-api-access-ln66j\") pod \"nova-api-ab4b-account-create-update-mk5gq\" (UID: \"92c91a14-9080-4840-bc0e-9e6b103d9d01\") " pod="openstack/nova-api-ab4b-account-create-update-mk5gq" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.958074 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c91a14-9080-4840-bc0e-9e6b103d9d01-operator-scripts\") pod \"nova-api-ab4b-account-create-update-mk5gq\" (UID: \"92c91a14-9080-4840-bc0e-9e6b103d9d01\") " pod="openstack/nova-api-ab4b-account-create-update-mk5gq" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.959156 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6bf70d2-1257-434e-9597-b4c98e4bb63b-operator-scripts\") pod \"nova-cell1-db-create-v7lw4\" (UID: \"e6bf70d2-1257-434e-9597-b4c98e4bb63b\") " pod="openstack/nova-cell1-db-create-v7lw4" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.961251 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c91a14-9080-4840-bc0e-9e6b103d9d01-operator-scripts\") pod \"nova-api-ab4b-account-create-update-mk5gq\" (UID: \"92c91a14-9080-4840-bc0e-9e6b103d9d01\") " pod="openstack/nova-api-ab4b-account-create-update-mk5gq" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.984188 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wggnf" Feb 04 11:47:33 crc kubenswrapper[4728]: I0204 11:47:33.994181 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln66j\" (UniqueName: \"kubernetes.io/projected/92c91a14-9080-4840-bc0e-9e6b103d9d01-kube-api-access-ln66j\") pod \"nova-api-ab4b-account-create-update-mk5gq\" (UID: \"92c91a14-9080-4840-bc0e-9e6b103d9d01\") " pod="openstack/nova-api-ab4b-account-create-update-mk5gq" Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.005550 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7e37-account-create-update-svvcv"] Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.011338 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxcmp\" (UniqueName: \"kubernetes.io/projected/e6bf70d2-1257-434e-9597-b4c98e4bb63b-kube-api-access-lxcmp\") pod \"nova-cell1-db-create-v7lw4\" (UID: \"e6bf70d2-1257-434e-9597-b4c98e4bb63b\") " pod="openstack/nova-cell1-db-create-v7lw4" Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.014786 4728 scope.go:117] "RemoveContainer" containerID="01cca5f9fd67a4a20c7bb8dd603211aa5f56a4ae3ebaef5da0b100e5a9560f7d" Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.018363 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-v7lw4" Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.046061 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-86487d557f-7kzbg"] Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.059989 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ed46b-54ee-4fe9-adca-5986b1befc95-operator-scripts\") pod \"nova-cell0-7e37-account-create-update-svvcv\" (UID: \"c63ed46b-54ee-4fe9-adca-5986b1befc95\") " pod="openstack/nova-cell0-7e37-account-create-update-svvcv" Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.060336 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfzzz\" (UniqueName: \"kubernetes.io/projected/c63ed46b-54ee-4fe9-adca-5986b1befc95-kube-api-access-qfzzz\") pod \"nova-cell0-7e37-account-create-update-svvcv\" (UID: \"c63ed46b-54ee-4fe9-adca-5986b1befc95\") " pod="openstack/nova-cell0-7e37-account-create-update-svvcv" Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.061240 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ed46b-54ee-4fe9-adca-5986b1befc95-operator-scripts\") pod \"nova-cell0-7e37-account-create-update-svvcv\" (UID: \"c63ed46b-54ee-4fe9-adca-5986b1befc95\") " pod="openstack/nova-cell0-7e37-account-create-update-svvcv" Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.061292 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-86487d557f-7kzbg"] Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.092975 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-68d54cf779-mw5kw"] Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.131284 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-68d54cf779-mw5kw"] Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.148767 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfzzz\" (UniqueName: \"kubernetes.io/projected/c63ed46b-54ee-4fe9-adca-5986b1befc95-kube-api-access-qfzzz\") pod \"nova-cell0-7e37-account-create-update-svvcv\" (UID: \"c63ed46b-54ee-4fe9-adca-5986b1befc95\") " pod="openstack/nova-cell0-7e37-account-create-update-svvcv" Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.236780 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-81ae-account-create-update-c6p7w"] Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.238017 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-81ae-account-create-update-c6p7w" Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.251102 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-81ae-account-create-update-c6p7w"] Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.253661 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.303174 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ab4b-account-create-update-mk5gq" Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.336285 4728 util.go:30] "No sandbox for pod can be found. 
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.387085 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47twp\" (UniqueName: \"kubernetes.io/projected/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-kube-api-access-47twp\") pod \"nova-cell1-81ae-account-create-update-c6p7w\" (UID: \"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0\") " pod="openstack/nova-cell1-81ae-account-create-update-c6p7w"
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.387466 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-operator-scripts\") pod \"nova-cell1-81ae-account-create-update-c6p7w\" (UID: \"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0\") " pod="openstack/nova-cell1-81ae-account-create-update-c6p7w"
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.489223 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47twp\" (UniqueName: \"kubernetes.io/projected/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-kube-api-access-47twp\") pod \"nova-cell1-81ae-account-create-update-c6p7w\" (UID: \"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0\") " pod="openstack/nova-cell1-81ae-account-create-update-c6p7w"
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.489270 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-operator-scripts\") pod \"nova-cell1-81ae-account-create-update-c6p7w\" (UID: \"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0\") " pod="openstack/nova-cell1-81ae-account-create-update-c6p7w"
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.490120 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-operator-scripts\") pod \"nova-cell1-81ae-account-create-update-c6p7w\" (UID: \"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0\") " pod="openstack/nova-cell1-81ae-account-create-update-c6p7w"
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.553180 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47twp\" (UniqueName: \"kubernetes.io/projected/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-kube-api-access-47twp\") pod \"nova-cell1-81ae-account-create-update-c6p7w\" (UID: \"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0\") " pod="openstack/nova-cell1-81ae-account-create-update-c6p7w"
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.603166 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-81ae-account-create-update-c6p7w"
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.685352 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mz5w2"]
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.747903 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq"
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.846409 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xwdgg"]
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.846643 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" podUID="a47e2788-f585-4894-8e5b-e3b81fdafa60" containerName="dnsmasq-dns" containerID="cri-o://0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2" gracePeriod=10
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.876992 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mz5w2" event={"ID":"8ffe9e93-b683-4326-b0dd-ec6eb798ab50","Type":"ContainerStarted","Data":"260510aa5b50c6f46676cc297720ce58e882a4fe24c30dea1b52ffae25b6ce18"}
Feb 04 11:47:34 crc kubenswrapper[4728]: I0204 11:47:34.900842 4728 scope.go:117] "RemoveContainer" containerID="afede4fbbe825fcb9ded527ea707226a216f6002726bd09fabb83f9aee2acbbe"
Feb 04 11:47:34 crc kubenswrapper[4728]: E0204 11:47:34.901175 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-84d95cd6d8-jk9zt_openstack(235ea075-2aaf-4f43-a38a-83f118af4592)\"" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" podUID="235ea075-2aaf-4f43-a38a-83f118af4592"
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.254638 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wggnf"]
Feb 04 11:47:35 crc kubenswrapper[4728]: W0204 11:47:35.264903 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6bf70d2_1257_434e_9597_b4c98e4bb63b.slice/crio-c907416fffe4a074ab72892db73181a3dd392ec31ad5aa0198710c207e3b6aa3 WatchSource:0}: Error finding container c907416fffe4a074ab72892db73181a3dd392ec31ad5aa0198710c207e3b6aa3: Status 404 returned error can't find the container with id c907416fffe4a074ab72892db73181a3dd392ec31ad5aa0198710c207e3b6aa3
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.273601 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-v7lw4"]
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.465277 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ab4b-account-create-update-mk5gq"]
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.511172 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7e37-account-create-update-svvcv"]
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.572681 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd56842-4788-44ba-952e-76b8808474a6" path="/var/lib/kubelet/pods/4fd56842-4788-44ba-952e-76b8808474a6/volumes"
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.573306 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a07d0233-6135-4630-90b5-3a8c3b7d2a97" path="/var/lib/kubelet/pods/a07d0233-6135-4630-90b5-3a8c3b7d2a97/volumes"
path="/var/lib/kubelet/pods/a07d0233-6135-4630-90b5-3a8c3b7d2a97/volumes" Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.742614 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cd5549f4d-zhk8w" Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.764495 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-81ae-account-create-update-c6p7w"] Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.851718 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.919404 4728 generic.go:334] "Generic (PLEG): container finished" podID="beab3e33-962c-46f9-ac60-a8a739d86cac" containerID="1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6" exitCode=0 Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.919478 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cd5549f4d-zhk8w" event={"ID":"beab3e33-962c-46f9-ac60-a8a739d86cac","Type":"ContainerDied","Data":"1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6"} Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.919509 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cd5549f4d-zhk8w" event={"ID":"beab3e33-962c-46f9-ac60-a8a739d86cac","Type":"ContainerDied","Data":"842aa7633d9953929b0644cc56d041bb8f9b052cc337e98ad32444485220866d"} Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.919528 4728 scope.go:117] "RemoveContainer" containerID="44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256" Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.919672 4728 util.go:48] "No ready sandbox for pod can be found. 
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.924216 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt"
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.931598 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-combined-ca-bundle\") pod \"beab3e33-962c-46f9-ac60-a8a739d86cac\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") "
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.931686 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-ovndb-tls-certs\") pod \"beab3e33-962c-46f9-ac60-a8a739d86cac\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") "
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.931782 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5d4d\" (UniqueName: \"kubernetes.io/projected/beab3e33-962c-46f9-ac60-a8a739d86cac-kube-api-access-d5d4d\") pod \"beab3e33-962c-46f9-ac60-a8a739d86cac\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") "
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.931934 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-config\") pod \"beab3e33-962c-46f9-ac60-a8a739d86cac\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") "
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.932003 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-httpd-config\") pod \"beab3e33-962c-46f9-ac60-a8a739d86cac\" (UID: \"beab3e33-962c-46f9-ac60-a8a739d86cac\") "
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.943995 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beab3e33-962c-46f9-ac60-a8a739d86cac-kube-api-access-d5d4d" (OuterVolumeSpecName: "kube-api-access-d5d4d") pod "beab3e33-962c-46f9-ac60-a8a739d86cac" (UID: "beab3e33-962c-46f9-ac60-a8a739d86cac"). InnerVolumeSpecName "kube-api-access-d5d4d". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.954023 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-81ae-account-create-update-c6p7w" event={"ID":"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0","Type":"ContainerStarted","Data":"ffebdf502ae1638e9f935b8ee0fb8ee21eca7ba410bcc4119d314cdf7476cfda"} Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.960788 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.961859 4728 scope.go:117] "RemoveContainer" containerID="71266a841fafafd6783c0c85ed0134dc2a42352ca5c047f1995c8b727e42c9c9" Feb 04 11:47:35 crc kubenswrapper[4728]: E0204 11:47:35.962116 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-647757f45c-fclw2_openstack(4a48c806-c596-4c79-8b6a-123a94b9f557)\"" pod="openstack/heat-api-647757f45c-fclw2" podUID="4a48c806-c596-4c79-8b6a-123a94b9f557" Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.962507 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.973179 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v7lw4" event={"ID":"e6bf70d2-1257-434e-9597-b4c98e4bb63b","Type":"ContainerStarted","Data":"c907416fffe4a074ab72892db73181a3dd392ec31ad5aa0198710c207e3b6aa3"} Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.975044 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "beab3e33-962c-46f9-ac60-a8a739d86cac" (UID: "beab3e33-962c-46f9-ac60-a8a739d86cac"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.993672 4728 generic.go:334] "Generic (PLEG): container finished" podID="a47e2788-f585-4894-8e5b-e3b81fdafa60" containerID="0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2" exitCode=0 Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.993794 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" event={"ID":"a47e2788-f585-4894-8e5b-e3b81fdafa60","Type":"ContainerDied","Data":"0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2"} Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.993824 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-xwdgg" event={"ID":"a47e2788-f585-4894-8e5b-e3b81fdafa60","Type":"ContainerDied","Data":"a1ef2bd7a13a4645b4504fcae46249b6e82511553ac5e4de1614bf471a895b79"} Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.993882 4728 util.go:48] "No ready sandbox for pod can be found. 
Feb 04 11:47:35 crc kubenswrapper[4728]: I0204 11:47:35.999304 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ab4b-account-create-update-mk5gq" event={"ID":"92c91a14-9080-4840-bc0e-9e6b103d9d01","Type":"ContainerStarted","Data":"d2c8872aa97868eca72325a14c0b5858e1b6c9e9fd8a81bf978462df6263d1ef"}
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.009390 4728 scope.go:117] "RemoveContainer" containerID="1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6"
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.009529 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-v7lw4" podStartSLOduration=3.009515481 podStartE2EDuration="3.009515481s" podCreationTimestamp="2026-02-04 11:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:47:35.993509488 +0000 UTC m=+1205.136213873" watchObservedRunningTime="2026-02-04 11:47:36.009515481 +0000 UTC m=+1205.152219866"
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.015475 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7e37-account-create-update-svvcv" event={"ID":"c63ed46b-54ee-4fe9-adca-5986b1befc95","Type":"ContainerStarted","Data":"dab6ec6aedc0fcf881ba860239b24f444e09ab73a96b2c3e8e67e3b6551d5801"}
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.034612 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-svc\") pod \"a47e2788-f585-4894-8e5b-e3b81fdafa60\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") "
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.034797 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c99m4\" (UniqueName: \"kubernetes.io/projected/a47e2788-f585-4894-8e5b-e3b81fdafa60-kube-api-access-c99m4\") pod \"a47e2788-f585-4894-8e5b-e3b81fdafa60\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") "
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.034830 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-sb\") pod \"a47e2788-f585-4894-8e5b-e3b81fdafa60\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") "
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.034914 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-config\") pod \"a47e2788-f585-4894-8e5b-e3b81fdafa60\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") "
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.034938 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-nb\") pod \"a47e2788-f585-4894-8e5b-e3b81fdafa60\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") "
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.034994 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-swift-storage-0\") pod \"a47e2788-f585-4894-8e5b-e3b81fdafa60\" (UID: \"a47e2788-f585-4894-8e5b-e3b81fdafa60\") "
\"a47e2788-f585-4894-8e5b-e3b81fdafa60\") " Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.035595 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.035619 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5d4d\" (UniqueName: \"kubernetes.io/projected/beab3e33-962c-46f9-ac60-a8a739d86cac-kube-api-access-d5d4d\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.057253 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wggnf" event={"ID":"e23d5efb-8f3f-40cf-992b-00aa2416f23b","Type":"ContainerStarted","Data":"d9c3d4a255e4b4b6367a727dea0b48cf1404b72f97ad4fddd862953f8ebe992f"} Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.057300 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wggnf" event={"ID":"e23d5efb-8f3f-40cf-992b-00aa2416f23b","Type":"ContainerStarted","Data":"ea6a0e00476013b376e539a2ca0d870ab7c404e5a9cf91e40fed599d1732d4f8"} Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.064857 4728 generic.go:334] "Generic (PLEG): container finished" podID="8ffe9e93-b683-4326-b0dd-ec6eb798ab50" containerID="faadb80ff7fbc70da95feb229b34b7951cbaa1d0bad0e3c8e0fb8784523c2f0c" exitCode=0 Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.065252 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mz5w2" event={"ID":"8ffe9e93-b683-4326-b0dd-ec6eb798ab50","Type":"ContainerDied","Data":"faadb80ff7fbc70da95feb229b34b7951cbaa1d0bad0e3c8e0fb8784523c2f0c"} Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.065790 4728 scope.go:117] "RemoveContainer" containerID="afede4fbbe825fcb9ded527ea707226a216f6002726bd09fabb83f9aee2acbbe" Feb 04 11:47:36 crc kubenswrapper[4728]: E0204 11:47:36.066693 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-84d95cd6d8-jk9zt_openstack(235ea075-2aaf-4f43-a38a-83f118af4592)\"" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" podUID="235ea075-2aaf-4f43-a38a-83f118af4592" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.076487 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-wggnf" podStartSLOduration=3.076465393 podStartE2EDuration="3.076465393s" podCreationTimestamp="2026-02-04 11:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:47:36.075951531 +0000 UTC m=+1205.218655926" watchObservedRunningTime="2026-02-04 11:47:36.076465393 +0000 UTC m=+1205.219169778" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.087026 4728 scope.go:117] "RemoveContainer" containerID="44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256" Feb 04 11:47:36 crc kubenswrapper[4728]: E0204 11:47:36.095008 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256\": container with ID starting with 44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256 not found: ID does not exist" 
containerID="44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.095048 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256"} err="failed to get container status \"44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256\": rpc error: code = NotFound desc = could not find container \"44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256\": container with ID starting with 44bd116c9df94b39bc210043ee1f825fd9eddacd3ba20bf277ac5841c5c26256 not found: ID does not exist" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.095073 4728 scope.go:117] "RemoveContainer" containerID="1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.100344 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a47e2788-f585-4894-8e5b-e3b81fdafa60-kube-api-access-c99m4" (OuterVolumeSpecName: "kube-api-access-c99m4") pod "a47e2788-f585-4894-8e5b-e3b81fdafa60" (UID: "a47e2788-f585-4894-8e5b-e3b81fdafa60"). InnerVolumeSpecName "kube-api-access-c99m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:36 crc kubenswrapper[4728]: E0204 11:47:36.100704 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6\": container with ID starting with 1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6 not found: ID does not exist" containerID="1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.100765 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6"} err="failed to get container status \"1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6\": rpc error: code = NotFound desc = could not find container \"1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6\": container with ID starting with 1dece61927925b7f45a67b59cd4ce7d59dd94a0eedd6258de8d680ed3ca7a9d6 not found: ID does not exist" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.100796 4728 scope.go:117] "RemoveContainer" containerID="0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.177937 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c99m4\" (UniqueName: \"kubernetes.io/projected/a47e2788-f585-4894-8e5b-e3b81fdafa60-kube-api-access-c99m4\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.230146 4728 scope.go:117] "RemoveContainer" containerID="438b887b8e447c64fda07c84accd52f88b732f0dc62292893cc841aa1fc848b5" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.350510 4728 scope.go:117] "RemoveContainer" containerID="0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2" Feb 04 11:47:36 crc kubenswrapper[4728]: E0204 11:47:36.351497 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2\": container with ID starting with 0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2 not 
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.351529 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2"} err="failed to get container status \"0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2\": rpc error: code = NotFound desc = could not find container \"0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2\": container with ID starting with 0e50c148a624fddd38e92937716944231539d28eda5076542425c468c984f2c2 not found: ID does not exist"
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.351553 4728 scope.go:117] "RemoveContainer" containerID="438b887b8e447c64fda07c84accd52f88b732f0dc62292893cc841aa1fc848b5"
Feb 04 11:47:36 crc kubenswrapper[4728]: E0204 11:47:36.352108 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"438b887b8e447c64fda07c84accd52f88b732f0dc62292893cc841aa1fc848b5\": container with ID starting with 438b887b8e447c64fda07c84accd52f88b732f0dc62292893cc841aa1fc848b5 not found: ID does not exist" containerID="438b887b8e447c64fda07c84accd52f88b732f0dc62292893cc841aa1fc848b5"
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.352134 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"438b887b8e447c64fda07c84accd52f88b732f0dc62292893cc841aa1fc848b5"} err="failed to get container status \"438b887b8e447c64fda07c84accd52f88b732f0dc62292893cc841aa1fc848b5\": rpc error: code = NotFound desc = could not find container \"438b887b8e447c64fda07c84accd52f88b732f0dc62292893cc841aa1fc848b5\": container with ID starting with 438b887b8e447c64fda07c84accd52f88b732f0dc62292893cc841aa1fc848b5 not found: ID does not exist"
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.510795 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a47e2788-f585-4894-8e5b-e3b81fdafa60" (UID: "a47e2788-f585-4894-8e5b-e3b81fdafa60"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.524265 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a47e2788-f585-4894-8e5b-e3b81fdafa60" (UID: "a47e2788-f585-4894-8e5b-e3b81fdafa60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.536909 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-config" (OuterVolumeSpecName: "config") pod "beab3e33-962c-46f9-ac60-a8a739d86cac" (UID: "beab3e33-962c-46f9-ac60-a8a739d86cac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.551868 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "beab3e33-962c-46f9-ac60-a8a739d86cac" (UID: "beab3e33-962c-46f9-ac60-a8a739d86cac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.552161 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "beab3e33-962c-46f9-ac60-a8a739d86cac" (UID: "beab3e33-962c-46f9-ac60-a8a739d86cac"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.553643 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a47e2788-f585-4894-8e5b-e3b81fdafa60" (UID: "a47e2788-f585-4894-8e5b-e3b81fdafa60"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.577338 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a47e2788-f585-4894-8e5b-e3b81fdafa60" (UID: "a47e2788-f585-4894-8e5b-e3b81fdafa60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.577956 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-config" (OuterVolumeSpecName: "config") pod "a47e2788-f585-4894-8e5b-e3b81fdafa60" (UID: "a47e2788-f585-4894-8e5b-e3b81fdafa60"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.594629 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.595145 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.595227 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.595358 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.595422 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.595481 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.595537 4728 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/beab3e33-962c-46f9-ac60-a8a739d86cac-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.595599 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a47e2788-f585-4894-8e5b-e3b81fdafa60-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.771528 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xwdgg"] Feb 04 11:47:36 crc kubenswrapper[4728]: I0204 11:47:36.793800 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-xwdgg"] Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.041602 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cd5549f4d-zhk8w"] Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.053424 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7cd5549f4d-zhk8w"] Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.116040 4728 generic.go:334] "Generic (PLEG): container finished" podID="92c91a14-9080-4840-bc0e-9e6b103d9d01" containerID="10ad17bc3016f1455b1c2b70dabdb568661c4df8e73566036664c1fca6c3e097" exitCode=0 Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.116421 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ab4b-account-create-update-mk5gq" event={"ID":"92c91a14-9080-4840-bc0e-9e6b103d9d01","Type":"ContainerDied","Data":"10ad17bc3016f1455b1c2b70dabdb568661c4df8e73566036664c1fca6c3e097"} Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.118645 4728 generic.go:334] "Generic (PLEG): container finished" 
podID="e23d5efb-8f3f-40cf-992b-00aa2416f23b" containerID="d9c3d4a255e4b4b6367a727dea0b48cf1404b72f97ad4fddd862953f8ebe992f" exitCode=0 Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.118707 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wggnf" event={"ID":"e23d5efb-8f3f-40cf-992b-00aa2416f23b","Type":"ContainerDied","Data":"d9c3d4a255e4b4b6367a727dea0b48cf1404b72f97ad4fddd862953f8ebe992f"} Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.120588 4728 generic.go:334] "Generic (PLEG): container finished" podID="2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0" containerID="b2e424e97a1b32bd70eddc0a5c12764e595ce2f0c4e993c5e0c0a465963210a0" exitCode=0 Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.120650 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-81ae-account-create-update-c6p7w" event={"ID":"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0","Type":"ContainerDied","Data":"b2e424e97a1b32bd70eddc0a5c12764e595ce2f0c4e993c5e0c0a465963210a0"} Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.122359 4728 generic.go:334] "Generic (PLEG): container finished" podID="e6bf70d2-1257-434e-9597-b4c98e4bb63b" containerID="c3f76dd9e78445b250116ba41045e59d04fcdbd9f472e417dbde64ab8f860aca" exitCode=0 Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.122431 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v7lw4" event={"ID":"e6bf70d2-1257-434e-9597-b4c98e4bb63b","Type":"ContainerDied","Data":"c3f76dd9e78445b250116ba41045e59d04fcdbd9f472e417dbde64ab8f860aca"} Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.125780 4728 generic.go:334] "Generic (PLEG): container finished" podID="c63ed46b-54ee-4fe9-adca-5986b1befc95" containerID="f10851a5a2dcbd0e973ec2bf9a3dbf09487aaa302e171efebde68e286d1bae6e" exitCode=0 Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.125854 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7e37-account-create-update-svvcv" event={"ID":"c63ed46b-54ee-4fe9-adca-5986b1befc95","Type":"ContainerDied","Data":"f10851a5a2dcbd0e973ec2bf9a3dbf09487aaa302e171efebde68e286d1bae6e"} Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.128063 4728 scope.go:117] "RemoveContainer" containerID="71266a841fafafd6783c0c85ed0134dc2a42352ca5c047f1995c8b727e42c9c9" Feb 04 11:47:37 crc kubenswrapper[4728]: E0204 11:47:37.128279 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-647757f45c-fclw2_openstack(4a48c806-c596-4c79-8b6a-123a94b9f557)\"" pod="openstack/heat-api-647757f45c-fclw2" podUID="4a48c806-c596-4c79-8b6a-123a94b9f557" Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.226814 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.400879 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85f76984f4-b8kmh" Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.490682 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5dc5978b96-n87bc"] Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.491063 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5dc5978b96-n87bc" podUID="9983ef36-a557-4867-8d8f-a8f5d1b77eae" containerName="placement-log" 
containerID="cri-o://82d3e4c09c91ea5db6c7ee0188289ff2b3bca192627d1c1451b315d996b937f7" gracePeriod=30 Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.491558 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5dc5978b96-n87bc" podUID="9983ef36-a557-4867-8d8f-a8f5d1b77eae" containerName="placement-api" containerID="cri-o://9db4a3eec0c8e4ebd6f9beb6bc3a0893a1f8cd3fc932cd01bd3ef7171e3027ab" gracePeriod=30 Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.586851 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a47e2788-f585-4894-8e5b-e3b81fdafa60" path="/var/lib/kubelet/pods/a47e2788-f585-4894-8e5b-e3b81fdafa60/volumes" Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.587467 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beab3e33-962c-46f9-ac60-a8a739d86cac" path="/var/lib/kubelet/pods/beab3e33-962c-46f9-ac60-a8a739d86cac/volumes" Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.674468 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mz5w2" Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.825610 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpqv6\" (UniqueName: \"kubernetes.io/projected/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-kube-api-access-cpqv6\") pod \"8ffe9e93-b683-4326-b0dd-ec6eb798ab50\" (UID: \"8ffe9e93-b683-4326-b0dd-ec6eb798ab50\") " Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.825689 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-operator-scripts\") pod \"8ffe9e93-b683-4326-b0dd-ec6eb798ab50\" (UID: \"8ffe9e93-b683-4326-b0dd-ec6eb798ab50\") " Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.826637 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ffe9e93-b683-4326-b0dd-ec6eb798ab50" (UID: "8ffe9e93-b683-4326-b0dd-ec6eb798ab50"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.859937 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-kube-api-access-cpqv6" (OuterVolumeSpecName: "kube-api-access-cpqv6") pod "8ffe9e93-b683-4326-b0dd-ec6eb798ab50" (UID: "8ffe9e93-b683-4326-b0dd-ec6eb798ab50"). InnerVolumeSpecName "kube-api-access-cpqv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.928234 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:37 crc kubenswrapper[4728]: I0204 11:47:37.928275 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpqv6\" (UniqueName: \"kubernetes.io/projected/8ffe9e93-b683-4326-b0dd-ec6eb798ab50-kube-api-access-cpqv6\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.140400 4728 generic.go:334] "Generic (PLEG): container finished" podID="9983ef36-a557-4867-8d8f-a8f5d1b77eae" containerID="82d3e4c09c91ea5db6c7ee0188289ff2b3bca192627d1c1451b315d996b937f7" exitCode=143 Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.140477 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dc5978b96-n87bc" event={"ID":"9983ef36-a557-4867-8d8f-a8f5d1b77eae","Type":"ContainerDied","Data":"82d3e4c09c91ea5db6c7ee0188289ff2b3bca192627d1c1451b315d996b937f7"} Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.143319 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mz5w2" event={"ID":"8ffe9e93-b683-4326-b0dd-ec6eb798ab50","Type":"ContainerDied","Data":"260510aa5b50c6f46676cc297720ce58e882a4fe24c30dea1b52ffae25b6ce18"} Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.143357 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="260510aa5b50c6f46676cc297720ce58e882a4fe24c30dea1b52ffae25b6ce18" Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.143363 4728 util.go:48] "No ready sandbox for pod can be found. 
Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.146203 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="ceilometer-central-agent" containerID="cri-o://0b1a450e517be9022aaf55ecb7221bd13c8dba499d9aed422c40f42121a1612c" gracePeriod=30
Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.146540 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"730b4d31-76aa-48af-b5a5-44d29830cb54","Type":"ContainerStarted","Data":"71f4131fb4f41aec0a9dd220e5617066dfdbed0a93ff6980dcf64f906e0c3c72"}
Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.146867 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="proxy-httpd" containerID="cri-o://71f4131fb4f41aec0a9dd220e5617066dfdbed0a93ff6980dcf64f906e0c3c72" gracePeriod=30
Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.146892 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="ceilometer-notification-agent" containerID="cri-o://b406825e6ac62fa8d801834249b7082f78ee826af7175056821582891aec862e" gracePeriod=30
Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.146936 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="sg-core" containerID="cri-o://8b0149ad79811bcc633bf91f6c7bd99c94b3b9ecd8b7d68032875501fcdff921" gracePeriod=30
Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.147200 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.194368 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=9.27463214 podStartE2EDuration="17.194343481s" podCreationTimestamp="2026-02-04 11:47:21 +0000 UTC" firstStartedPulling="2026-02-04 11:47:28.88858883 +0000 UTC m=+1198.031293215" lastFinishedPulling="2026-02-04 11:47:36.808300171 +0000 UTC m=+1205.951004556" observedRunningTime="2026-02-04 11:47:38.179846733 +0000 UTC m=+1207.322551118" watchObservedRunningTime="2026-02-04 11:47:38.194343481 +0000 UTC m=+1207.337047866"
Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.897704 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7e37-account-create-update-svvcv"
Feb 04 11:47:38 crc kubenswrapper[4728]: I0204 11:47:38.904377 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-v7lw4"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.053866 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ed46b-54ee-4fe9-adca-5986b1befc95-operator-scripts\") pod \"c63ed46b-54ee-4fe9-adca-5986b1befc95\" (UID: \"c63ed46b-54ee-4fe9-adca-5986b1befc95\") "
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.053942 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfzzz\" (UniqueName: \"kubernetes.io/projected/c63ed46b-54ee-4fe9-adca-5986b1befc95-kube-api-access-qfzzz\") pod \"c63ed46b-54ee-4fe9-adca-5986b1befc95\" (UID: \"c63ed46b-54ee-4fe9-adca-5986b1befc95\") "
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.054012 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6bf70d2-1257-434e-9597-b4c98e4bb63b-operator-scripts\") pod \"e6bf70d2-1257-434e-9597-b4c98e4bb63b\" (UID: \"e6bf70d2-1257-434e-9597-b4c98e4bb63b\") "
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.054065 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxcmp\" (UniqueName: \"kubernetes.io/projected/e6bf70d2-1257-434e-9597-b4c98e4bb63b-kube-api-access-lxcmp\") pod \"e6bf70d2-1257-434e-9597-b4c98e4bb63b\" (UID: \"e6bf70d2-1257-434e-9597-b4c98e4bb63b\") "
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.054646 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c63ed46b-54ee-4fe9-adca-5986b1befc95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c63ed46b-54ee-4fe9-adca-5986b1befc95" (UID: "c63ed46b-54ee-4fe9-adca-5986b1befc95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.056184 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6bf70d2-1257-434e-9597-b4c98e4bb63b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6bf70d2-1257-434e-9597-b4c98e4bb63b" (UID: "e6bf70d2-1257-434e-9597-b4c98e4bb63b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.060221 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63ed46b-54ee-4fe9-adca-5986b1befc95-kube-api-access-qfzzz" (OuterVolumeSpecName: "kube-api-access-qfzzz") pod "c63ed46b-54ee-4fe9-adca-5986b1befc95" (UID: "c63ed46b-54ee-4fe9-adca-5986b1befc95"). InnerVolumeSpecName "kube-api-access-qfzzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.071917 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6bf70d2-1257-434e-9597-b4c98e4bb63b-kube-api-access-lxcmp" (OuterVolumeSpecName: "kube-api-access-lxcmp") pod "e6bf70d2-1257-434e-9597-b4c98e4bb63b" (UID: "e6bf70d2-1257-434e-9597-b4c98e4bb63b"). InnerVolumeSpecName "kube-api-access-lxcmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.119140 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-81ae-account-create-update-c6p7w"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.127843 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wggnf"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.141714 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ab4b-account-create-update-mk5gq"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.157395 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ed46b-54ee-4fe9-adca-5986b1befc95-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.157429 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfzzz\" (UniqueName: \"kubernetes.io/projected/c63ed46b-54ee-4fe9-adca-5986b1befc95-kube-api-access-qfzzz\") on node \"crc\" DevicePath \"\""
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.157443 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6bf70d2-1257-434e-9597-b4c98e4bb63b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.157454 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxcmp\" (UniqueName: \"kubernetes.io/projected/e6bf70d2-1257-434e-9597-b4c98e4bb63b-kube-api-access-lxcmp\") on node \"crc\" DevicePath \"\""
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.163543 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ab4b-account-create-update-mk5gq" event={"ID":"92c91a14-9080-4840-bc0e-9e6b103d9d01","Type":"ContainerDied","Data":"d2c8872aa97868eca72325a14c0b5858e1b6c9e9fd8a81bf978462df6263d1ef"}
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.163587 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2c8872aa97868eca72325a14c0b5858e1b6c9e9fd8a81bf978462df6263d1ef"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.163654 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ab4b-account-create-update-mk5gq"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.172235 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7e37-account-create-update-svvcv" event={"ID":"c63ed46b-54ee-4fe9-adca-5986b1befc95","Type":"ContainerDied","Data":"dab6ec6aedc0fcf881ba860239b24f444e09ab73a96b2c3e8e67e3b6551d5801"}
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.172284 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dab6ec6aedc0fcf881ba860239b24f444e09ab73a96b2c3e8e67e3b6551d5801"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.172360 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7e37-account-create-update-svvcv"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.193102 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wggnf" event={"ID":"e23d5efb-8f3f-40cf-992b-00aa2416f23b","Type":"ContainerDied","Data":"ea6a0e00476013b376e539a2ca0d870ab7c404e5a9cf91e40fed599d1732d4f8"}
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.193151 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea6a0e00476013b376e539a2ca0d870ab7c404e5a9cf91e40fed599d1732d4f8"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.193234 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wggnf"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.217808 4728 generic.go:334] "Generic (PLEG): container finished" podID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerID="71f4131fb4f41aec0a9dd220e5617066dfdbed0a93ff6980dcf64f906e0c3c72" exitCode=0
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.217846 4728 generic.go:334] "Generic (PLEG): container finished" podID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerID="8b0149ad79811bcc633bf91f6c7bd99c94b3b9ecd8b7d68032875501fcdff921" exitCode=2
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.217858 4728 generic.go:334] "Generic (PLEG): container finished" podID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerID="b406825e6ac62fa8d801834249b7082f78ee826af7175056821582891aec862e" exitCode=0
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.217953 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"730b4d31-76aa-48af-b5a5-44d29830cb54","Type":"ContainerDied","Data":"71f4131fb4f41aec0a9dd220e5617066dfdbed0a93ff6980dcf64f906e0c3c72"}
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.217984 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"730b4d31-76aa-48af-b5a5-44d29830cb54","Type":"ContainerDied","Data":"8b0149ad79811bcc633bf91f6c7bd99c94b3b9ecd8b7d68032875501fcdff921"}
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.217997 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"730b4d31-76aa-48af-b5a5-44d29830cb54","Type":"ContainerDied","Data":"b406825e6ac62fa8d801834249b7082f78ee826af7175056821582891aec862e"}
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.223991 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-81ae-account-create-update-c6p7w" event={"ID":"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0","Type":"ContainerDied","Data":"ffebdf502ae1638e9f935b8ee0fb8ee21eca7ba410bcc4119d314cdf7476cfda"}
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.224033 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-81ae-account-create-update-c6p7w"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.224036 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffebdf502ae1638e9f935b8ee0fb8ee21eca7ba410bcc4119d314cdf7476cfda"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.229734 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v7lw4" event={"ID":"e6bf70d2-1257-434e-9597-b4c98e4bb63b","Type":"ContainerDied","Data":"c907416fffe4a074ab72892db73181a3dd392ec31ad5aa0198710c207e3b6aa3"}
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.229815 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c907416fffe4a074ab72892db73181a3dd392ec31ad5aa0198710c207e3b6aa3"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.229881 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-v7lw4"
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.258888 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47twp\" (UniqueName: \"kubernetes.io/projected/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-kube-api-access-47twp\") pod \"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0\" (UID: \"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0\") "
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.258974 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln66j\" (UniqueName: \"kubernetes.io/projected/92c91a14-9080-4840-bc0e-9e6b103d9d01-kube-api-access-ln66j\") pod \"92c91a14-9080-4840-bc0e-9e6b103d9d01\" (UID: \"92c91a14-9080-4840-bc0e-9e6b103d9d01\") "
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.259009 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-operator-scripts\") pod \"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0\" (UID: \"2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0\") "
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.259081 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e23d5efb-8f3f-40cf-992b-00aa2416f23b-operator-scripts\") pod \"e23d5efb-8f3f-40cf-992b-00aa2416f23b\" (UID: \"e23d5efb-8f3f-40cf-992b-00aa2416f23b\") "
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.259125 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v99jf\" (UniqueName: \"kubernetes.io/projected/e23d5efb-8f3f-40cf-992b-00aa2416f23b-kube-api-access-v99jf\") pod \"e23d5efb-8f3f-40cf-992b-00aa2416f23b\" (UID: \"e23d5efb-8f3f-40cf-992b-00aa2416f23b\") "
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.259200 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c91a14-9080-4840-bc0e-9e6b103d9d01-operator-scripts\") pod \"92c91a14-9080-4840-bc0e-9e6b103d9d01\" (UID: \"92c91a14-9080-4840-bc0e-9e6b103d9d01\") "
Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.259719 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0" (UID: "2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.260050 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c91a14-9080-4840-bc0e-9e6b103d9d01-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92c91a14-9080-4840-bc0e-9e6b103d9d01" (UID: "92c91a14-9080-4840-bc0e-9e6b103d9d01"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.260387 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e23d5efb-8f3f-40cf-992b-00aa2416f23b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e23d5efb-8f3f-40cf-992b-00aa2416f23b" (UID: "e23d5efb-8f3f-40cf-992b-00aa2416f23b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.265041 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23d5efb-8f3f-40cf-992b-00aa2416f23b-kube-api-access-v99jf" (OuterVolumeSpecName: "kube-api-access-v99jf") pod "e23d5efb-8f3f-40cf-992b-00aa2416f23b" (UID: "e23d5efb-8f3f-40cf-992b-00aa2416f23b"). InnerVolumeSpecName "kube-api-access-v99jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.266623 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-kube-api-access-47twp" (OuterVolumeSpecName: "kube-api-access-47twp") pod "2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0" (UID: "2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0"). InnerVolumeSpecName "kube-api-access-47twp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.274740 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c91a14-9080-4840-bc0e-9e6b103d9d01-kube-api-access-ln66j" (OuterVolumeSpecName: "kube-api-access-ln66j") pod "92c91a14-9080-4840-bc0e-9e6b103d9d01" (UID: "92c91a14-9080-4840-bc0e-9e6b103d9d01"). InnerVolumeSpecName "kube-api-access-ln66j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.361155 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e23d5efb-8f3f-40cf-992b-00aa2416f23b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.361207 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v99jf\" (UniqueName: \"kubernetes.io/projected/e23d5efb-8f3f-40cf-992b-00aa2416f23b-kube-api-access-v99jf\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.361223 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c91a14-9080-4840-bc0e-9e6b103d9d01-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.361236 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47twp\" (UniqueName: \"kubernetes.io/projected/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-kube-api-access-47twp\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.361249 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln66j\" (UniqueName: \"kubernetes.io/projected/92c91a14-9080-4840-bc0e-9e6b103d9d01-kube-api-access-ln66j\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.361261 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:39 crc kubenswrapper[4728]: E0204 11:47:39.365951 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6bf70d2_1257_434e_9597_b4c98e4bb63b.slice\": RecentStats: unable to find data in memory cache]" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.667027 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.836464 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-546d7984c6-n6fdl" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.836515 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6bf54fd9cd-l9msv" Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.914989 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84d95cd6d8-jk9zt"] Feb 04 11:47:39 crc kubenswrapper[4728]: I0204 11:47:39.939817 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-647757f45c-fclw2"] Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.504416 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.513615 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.613651 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-combined-ca-bundle\") pod \"4a48c806-c596-4c79-8b6a-123a94b9f557\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.613732 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data-custom\") pod \"4a48c806-c596-4c79-8b6a-123a94b9f557\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.613793 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data\") pod \"235ea075-2aaf-4f43-a38a-83f118af4592\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.613866 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj2hc\" (UniqueName: \"kubernetes.io/projected/4a48c806-c596-4c79-8b6a-123a94b9f557-kube-api-access-lj2hc\") pod \"4a48c806-c596-4c79-8b6a-123a94b9f557\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.613894 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-combined-ca-bundle\") pod \"235ea075-2aaf-4f43-a38a-83f118af4592\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.613922 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data\") pod \"4a48c806-c596-4c79-8b6a-123a94b9f557\" (UID: \"4a48c806-c596-4c79-8b6a-123a94b9f557\") " Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.613973 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data-custom\") pod \"235ea075-2aaf-4f43-a38a-83f118af4592\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.614074 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgr4z\" (UniqueName: \"kubernetes.io/projected/235ea075-2aaf-4f43-a38a-83f118af4592-kube-api-access-tgr4z\") pod \"235ea075-2aaf-4f43-a38a-83f118af4592\" (UID: \"235ea075-2aaf-4f43-a38a-83f118af4592\") " Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.620968 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/235ea075-2aaf-4f43-a38a-83f118af4592-kube-api-access-tgr4z" (OuterVolumeSpecName: "kube-api-access-tgr4z") pod "235ea075-2aaf-4f43-a38a-83f118af4592" (UID: "235ea075-2aaf-4f43-a38a-83f118af4592"). InnerVolumeSpecName "kube-api-access-tgr4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.621660 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "235ea075-2aaf-4f43-a38a-83f118af4592" (UID: "235ea075-2aaf-4f43-a38a-83f118af4592"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.622630 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4a48c806-c596-4c79-8b6a-123a94b9f557" (UID: "4a48c806-c596-4c79-8b6a-123a94b9f557"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.627873 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a48c806-c596-4c79-8b6a-123a94b9f557-kube-api-access-lj2hc" (OuterVolumeSpecName: "kube-api-access-lj2hc") pod "4a48c806-c596-4c79-8b6a-123a94b9f557" (UID: "4a48c806-c596-4c79-8b6a-123a94b9f557"). InnerVolumeSpecName "kube-api-access-lj2hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.647573 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "235ea075-2aaf-4f43-a38a-83f118af4592" (UID: "235ea075-2aaf-4f43-a38a-83f118af4592"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.654749 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a48c806-c596-4c79-8b6a-123a94b9f557" (UID: "4a48c806-c596-4c79-8b6a-123a94b9f557"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.673861 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data" (OuterVolumeSpecName: "config-data") pod "4a48c806-c596-4c79-8b6a-123a94b9f557" (UID: "4a48c806-c596-4c79-8b6a-123a94b9f557"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.679023 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data" (OuterVolumeSpecName: "config-data") pod "235ea075-2aaf-4f43-a38a-83f118af4592" (UID: "235ea075-2aaf-4f43-a38a-83f118af4592"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.716110 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.716143 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.716153 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.716161 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj2hc\" (UniqueName: \"kubernetes.io/projected/4a48c806-c596-4c79-8b6a-123a94b9f557-kube-api-access-lj2hc\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.716212 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.716222 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a48c806-c596-4c79-8b6a-123a94b9f557-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.716230 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/235ea075-2aaf-4f43-a38a-83f118af4592-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:40 crc kubenswrapper[4728]: I0204 11:47:40.716243 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgr4z\" (UniqueName: \"kubernetes.io/projected/235ea075-2aaf-4f43-a38a-83f118af4592-kube-api-access-tgr4z\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.250965 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-647757f45c-fclw2" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.250970 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647757f45c-fclw2" event={"ID":"4a48c806-c596-4c79-8b6a-123a94b9f557","Type":"ContainerDied","Data":"0a04fb87e03d0454a7581b7bbb099948eafa6f482417998c354ecfc713625624"} Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.251320 4728 scope.go:117] "RemoveContainer" containerID="71266a841fafafd6783c0c85ed0134dc2a42352ca5c047f1995c8b727e42c9c9" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.256521 4728 generic.go:334] "Generic (PLEG): container finished" podID="9983ef36-a557-4867-8d8f-a8f5d1b77eae" containerID="9db4a3eec0c8e4ebd6f9beb6bc3a0893a1f8cd3fc932cd01bd3ef7171e3027ab" exitCode=0 Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.256571 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dc5978b96-n87bc" event={"ID":"9983ef36-a557-4867-8d8f-a8f5d1b77eae","Type":"ContainerDied","Data":"9db4a3eec0c8e4ebd6f9beb6bc3a0893a1f8cd3fc932cd01bd3ef7171e3027ab"} Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.257968 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" event={"ID":"235ea075-2aaf-4f43-a38a-83f118af4592","Type":"ContainerDied","Data":"880d99eb3f4cfd052969b74d8e14eb8cad46b1091338653696099a5c23f29b5a"} Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.258032 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84d95cd6d8-jk9zt" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.287145 4728 scope.go:117] "RemoveContainer" containerID="afede4fbbe825fcb9ded527ea707226a216f6002726bd09fabb83f9aee2acbbe" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.360531 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-647757f45c-fclw2"] Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.384298 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-647757f45c-fclw2"] Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.391555 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84d95cd6d8-jk9zt"] Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.395027 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.400452 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-84d95cd6d8-jk9zt"] Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.436794 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-public-tls-certs\") pod \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.436846 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-internal-tls-certs\") pod \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.436873 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9983ef36-a557-4867-8d8f-a8f5d1b77eae-logs\") pod \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.437474 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9983ef36-a557-4867-8d8f-a8f5d1b77eae-logs" (OuterVolumeSpecName: "logs") pod "9983ef36-a557-4867-8d8f-a8f5d1b77eae" (UID: "9983ef36-a557-4867-8d8f-a8f5d1b77eae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.538193 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fq5r\" (UniqueName: \"kubernetes.io/projected/9983ef36-a557-4867-8d8f-a8f5d1b77eae-kube-api-access-7fq5r\") pod \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.538268 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-combined-ca-bundle\") pod \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.538329 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-scripts\") pod \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.538402 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-config-data\") pod \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\" (UID: \"9983ef36-a557-4867-8d8f-a8f5d1b77eae\") " Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.538772 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9983ef36-a557-4867-8d8f-a8f5d1b77eae-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.542791 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9983ef36-a557-4867-8d8f-a8f5d1b77eae-kube-api-access-7fq5r" (OuterVolumeSpecName: "kube-api-access-7fq5r") pod "9983ef36-a557-4867-8d8f-a8f5d1b77eae" (UID: "9983ef36-a557-4867-8d8f-a8f5d1b77eae"). InnerVolumeSpecName "kube-api-access-7fq5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.543037 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-scripts" (OuterVolumeSpecName: "scripts") pod "9983ef36-a557-4867-8d8f-a8f5d1b77eae" (UID: "9983ef36-a557-4867-8d8f-a8f5d1b77eae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.565494 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9983ef36-a557-4867-8d8f-a8f5d1b77eae" (UID: "9983ef36-a557-4867-8d8f-a8f5d1b77eae"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.568488 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9983ef36-a557-4867-8d8f-a8f5d1b77eae" (UID: "9983ef36-a557-4867-8d8f-a8f5d1b77eae"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.568537 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235ea075-2aaf-4f43-a38a-83f118af4592" path="/var/lib/kubelet/pods/235ea075-2aaf-4f43-a38a-83f118af4592/volumes" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.569257 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a48c806-c596-4c79-8b6a-123a94b9f557" path="/var/lib/kubelet/pods/4a48c806-c596-4c79-8b6a-123a94b9f557/volumes" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.593900 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9983ef36-a557-4867-8d8f-a8f5d1b77eae" (UID: "9983ef36-a557-4867-8d8f-a8f5d1b77eae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.599021 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-config-data" (OuterVolumeSpecName: "config-data") pod "9983ef36-a557-4867-8d8f-a8f5d1b77eae" (UID: "9983ef36-a557-4867-8d8f-a8f5d1b77eae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.640087 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.640121 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.640129 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.640138 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.640172 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9983ef36-a557-4867-8d8f-a8f5d1b77eae-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:41 crc kubenswrapper[4728]: I0204 11:47:41.640182 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fq5r\" (UniqueName: \"kubernetes.io/projected/9983ef36-a557-4867-8d8f-a8f5d1b77eae-kube-api-access-7fq5r\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:42 crc kubenswrapper[4728]: I0204 11:47:42.270724 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dc5978b96-n87bc" event={"ID":"9983ef36-a557-4867-8d8f-a8f5d1b77eae","Type":"ContainerDied","Data":"5df8f9e4da0082b9367870687116e830deda4b3aa19b3bdd91df16e6b4f9cbfd"} Feb 04 11:47:42 crc kubenswrapper[4728]: I0204 11:47:42.270862 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5dc5978b96-n87bc" Feb 04 11:47:42 crc kubenswrapper[4728]: I0204 11:47:42.270999 4728 scope.go:117] "RemoveContainer" containerID="9db4a3eec0c8e4ebd6f9beb6bc3a0893a1f8cd3fc932cd01bd3ef7171e3027ab" Feb 04 11:47:42 crc kubenswrapper[4728]: I0204 11:47:42.301290 4728 scope.go:117] "RemoveContainer" containerID="82d3e4c09c91ea5db6c7ee0188289ff2b3bca192627d1c1451b315d996b937f7" Feb 04 11:47:42 crc kubenswrapper[4728]: I0204 11:47:42.307268 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5dc5978b96-n87bc"] Feb 04 11:47:42 crc kubenswrapper[4728]: I0204 11:47:42.315182 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5dc5978b96-n87bc"] Feb 04 11:47:43 crc kubenswrapper[4728]: I0204 11:47:43.567126 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9983ef36-a557-4867-8d8f-a8f5d1b77eae" path="/var/lib/kubelet/pods/9983ef36-a557-4867-8d8f-a8f5d1b77eae/volumes" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305023 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j4npp"] Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305737 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23d5efb-8f3f-40cf-992b-00aa2416f23b" containerName="mariadb-database-create" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305778 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23d5efb-8f3f-40cf-992b-00aa2416f23b" containerName="mariadb-database-create" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305791 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c91a14-9080-4840-bc0e-9e6b103d9d01" containerName="mariadb-account-create-update" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305798 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c91a14-9080-4840-bc0e-9e6b103d9d01" containerName="mariadb-account-create-update" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305811 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bf70d2-1257-434e-9597-b4c98e4bb63b" containerName="mariadb-database-create" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305816 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bf70d2-1257-434e-9597-b4c98e4bb63b" containerName="mariadb-database-create" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305827 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9983ef36-a557-4867-8d8f-a8f5d1b77eae" containerName="placement-log" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305832 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9983ef36-a557-4867-8d8f-a8f5d1b77eae" containerName="placement-log" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305845 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63ed46b-54ee-4fe9-adca-5986b1befc95" containerName="mariadb-account-create-update" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305850 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63ed46b-54ee-4fe9-adca-5986b1befc95" containerName="mariadb-account-create-update" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305859 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beab3e33-962c-46f9-ac60-a8a739d86cac" containerName="neutron-api" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305866 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="beab3e33-962c-46f9-ac60-a8a739d86cac" containerName="neutron-api" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305875 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9983ef36-a557-4867-8d8f-a8f5d1b77eae" containerName="placement-api" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305881 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9983ef36-a557-4867-8d8f-a8f5d1b77eae" containerName="placement-api" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305892 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beab3e33-962c-46f9-ac60-a8a739d86cac" containerName="neutron-httpd" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305898 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="beab3e33-962c-46f9-ac60-a8a739d86cac" containerName="neutron-httpd" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305909 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a48c806-c596-4c79-8b6a-123a94b9f557" containerName="heat-api" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305916 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a48c806-c596-4c79-8b6a-123a94b9f557" containerName="heat-api" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305924 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235ea075-2aaf-4f43-a38a-83f118af4592" containerName="heat-cfnapi" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305930 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="235ea075-2aaf-4f43-a38a-83f118af4592" containerName="heat-cfnapi" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305944 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235ea075-2aaf-4f43-a38a-83f118af4592" containerName="heat-cfnapi" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305949 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="235ea075-2aaf-4f43-a38a-83f118af4592" containerName="heat-cfnapi" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305959 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47e2788-f585-4894-8e5b-e3b81fdafa60" containerName="init" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305964 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47e2788-f585-4894-8e5b-e3b81fdafa60" containerName="init" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.305978 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffe9e93-b683-4326-b0dd-ec6eb798ab50" containerName="mariadb-database-create" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.305986 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffe9e93-b683-4326-b0dd-ec6eb798ab50" containerName="mariadb-database-create" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.306002 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a48c806-c596-4c79-8b6a-123a94b9f557" containerName="heat-api" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306010 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a48c806-c596-4c79-8b6a-123a94b9f557" containerName="heat-api" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.306017 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0" containerName="mariadb-account-create-update" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306024 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0" containerName="mariadb-account-create-update" Feb 04 11:47:44 crc kubenswrapper[4728]: E0204 11:47:44.306033 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47e2788-f585-4894-8e5b-e3b81fdafa60" containerName="dnsmasq-dns" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306040 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47e2788-f585-4894-8e5b-e3b81fdafa60" containerName="dnsmasq-dns" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306222 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="235ea075-2aaf-4f43-a38a-83f118af4592" containerName="heat-cfnapi" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306238 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63ed46b-54ee-4fe9-adca-5986b1befc95" containerName="mariadb-account-create-update" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306251 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47e2788-f585-4894-8e5b-e3b81fdafa60" containerName="dnsmasq-dns" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306265 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ffe9e93-b683-4326-b0dd-ec6eb798ab50" containerName="mariadb-database-create" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306279 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="beab3e33-962c-46f9-ac60-a8a739d86cac" containerName="neutron-httpd" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306289 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9983ef36-a557-4867-8d8f-a8f5d1b77eae" containerName="placement-api" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306299 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="beab3e33-962c-46f9-ac60-a8a739d86cac" containerName="neutron-api" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306309 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a48c806-c596-4c79-8b6a-123a94b9f557" containerName="heat-api" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306320 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a48c806-c596-4c79-8b6a-123a94b9f557" containerName="heat-api" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306326 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23d5efb-8f3f-40cf-992b-00aa2416f23b" containerName="mariadb-database-create" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306338 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6bf70d2-1257-434e-9597-b4c98e4bb63b" containerName="mariadb-database-create" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306346 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c91a14-9080-4840-bc0e-9e6b103d9d01" containerName="mariadb-account-create-update" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306356 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0" containerName="mariadb-account-create-update" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306367 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9983ef36-a557-4867-8d8f-a8f5d1b77eae" containerName="placement-log" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.306929 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.308904 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.310599 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4dtz9" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.314217 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.318473 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j4npp"] Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.489840 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cf7v\" (UniqueName: \"kubernetes.io/projected/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-kube-api-access-7cf7v\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.490167 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.490303 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-config-data\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.490474 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-scripts\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.591982 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-scripts\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.592116 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cf7v\" (UniqueName: \"kubernetes.io/projected/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-kube-api-access-7cf7v\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.592144 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: 
\"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.592173 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-config-data\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.598493 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-config-data\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.598810 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.598830 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-scripts\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.609653 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cf7v\" (UniqueName: \"kubernetes.io/projected/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-kube-api-access-7cf7v\") pod \"nova-cell0-conductor-db-sync-j4npp\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:44 crc kubenswrapper[4728]: I0204 11:47:44.625926 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:47:45 crc kubenswrapper[4728]: I0204 11:47:45.151996 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j4npp"] Feb 04 11:47:45 crc kubenswrapper[4728]: W0204 11:47:45.156877 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8b00c45_cc7b_4de0_876c_9c1ea1bb1f71.slice/crio-60aceed8e105d9fdc17394fe6ca8c97fecf860b41ebfc1ef9909839e87f0f09e WatchSource:0}: Error finding container 60aceed8e105d9fdc17394fe6ca8c97fecf860b41ebfc1ef9909839e87f0f09e: Status 404 returned error can't find the container with id 60aceed8e105d9fdc17394fe6ca8c97fecf860b41ebfc1ef9909839e87f0f09e Feb 04 11:47:45 crc kubenswrapper[4728]: I0204 11:47:45.300516 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j4npp" event={"ID":"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71","Type":"ContainerStarted","Data":"60aceed8e105d9fdc17394fe6ca8c97fecf860b41ebfc1ef9909839e87f0f09e"} Feb 04 11:47:45 crc kubenswrapper[4728]: I0204 11:47:45.910138 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7bd84699b9-9ldwf" Feb 04 11:47:45 crc kubenswrapper[4728]: I0204 11:47:45.956348 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7cd578fb67-gp57x"] Feb 04 11:47:45 crc kubenswrapper[4728]: I0204 11:47:45.956557 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7cd578fb67-gp57x" podUID="ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6" containerName="heat-engine" containerID="cri-o://d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059" gracePeriod=60 Feb 04 11:47:49 crc kubenswrapper[4728]: I0204 11:47:49.352334 4728 generic.go:334] "Generic (PLEG): container finished" podID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerID="0b1a450e517be9022aaf55ecb7221bd13c8dba499d9aed422c40f42121a1612c" exitCode=0 Feb 04 11:47:49 crc kubenswrapper[4728]: I0204 11:47:49.352453 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"730b4d31-76aa-48af-b5a5-44d29830cb54","Type":"ContainerDied","Data":"0b1a450e517be9022aaf55ecb7221bd13c8dba499d9aed422c40f42121a1612c"} Feb 04 11:47:49 crc kubenswrapper[4728]: E0204 11:47:49.624250 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 04 11:47:49 crc kubenswrapper[4728]: E0204 11:47:49.627197 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 04 11:47:49 crc kubenswrapper[4728]: E0204 11:47:49.628662 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 04 11:47:49 crc 
kubenswrapper[4728]: E0204 11:47:49.628711 4728 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7cd578fb67-gp57x" podUID="ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6" containerName="heat-engine" Feb 04 11:47:51 crc kubenswrapper[4728]: I0204 11:47:51.761637 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.176:3000/\": dial tcp 10.217.0.176:3000: connect: connection refused" Feb 04 11:47:52 crc kubenswrapper[4728]: I0204 11:47:52.673148 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:47:52 crc kubenswrapper[4728]: I0204 11:47:52.673621 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" containerName="glance-log" containerID="cri-o://006412edd580c08a1199a125e3d71dba305327732d47a37b5785b686f562634b" gracePeriod=30 Feb 04 11:47:52 crc kubenswrapper[4728]: I0204 11:47:52.673686 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" containerName="glance-httpd" containerID="cri-o://304804edb116bd86772664ca52ea88307a40cbbf866b97b0dffa97e6fecbd3ea" gracePeriod=30 Feb 04 11:47:53 crc kubenswrapper[4728]: I0204 11:47:53.391472 4728 generic.go:334] "Generic (PLEG): container finished" podID="ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" containerID="006412edd580c08a1199a125e3d71dba305327732d47a37b5785b686f562634b" exitCode=143 Feb 04 11:47:53 crc kubenswrapper[4728]: I0204 11:47:53.391517 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d","Type":"ContainerDied","Data":"006412edd580c08a1199a125e3d71dba305327732d47a37b5785b686f562634b"} Feb 04 11:47:53 crc kubenswrapper[4728]: I0204 11:47:53.726356 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:47:53 crc kubenswrapper[4728]: I0204 11:47:53.728861 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd18bbb4-d813-4688-ad80-574154978db4" containerName="glance-log" containerID="cri-o://6d040dba62774b2756aa4d5ffd2d481fcb23e48f2773d7846d6ec0893a05317a" gracePeriod=30 Feb 04 11:47:53 crc kubenswrapper[4728]: I0204 11:47:53.729425 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd18bbb4-d813-4688-ad80-574154978db4" containerName="glance-httpd" containerID="cri-o://29d18c750afd6b4c043a6698b36057d2034dc163a11f7d6fe752bef2b3b52f80" gracePeriod=30 Feb 04 11:47:53 crc kubenswrapper[4728]: I0204 11:47:53.972023 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.076953 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-combined-ca-bundle\") pod \"730b4d31-76aa-48af-b5a5-44d29830cb54\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.077053 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-log-httpd\") pod \"730b4d31-76aa-48af-b5a5-44d29830cb54\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.077155 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-config-data\") pod \"730b4d31-76aa-48af-b5a5-44d29830cb54\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.077206 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jcp4\" (UniqueName: \"kubernetes.io/projected/730b4d31-76aa-48af-b5a5-44d29830cb54-kube-api-access-7jcp4\") pod \"730b4d31-76aa-48af-b5a5-44d29830cb54\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.077231 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-scripts\") pod \"730b4d31-76aa-48af-b5a5-44d29830cb54\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.077276 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-sg-core-conf-yaml\") pod \"730b4d31-76aa-48af-b5a5-44d29830cb54\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.077343 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-run-httpd\") pod \"730b4d31-76aa-48af-b5a5-44d29830cb54\" (UID: \"730b4d31-76aa-48af-b5a5-44d29830cb54\") " Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.078023 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "730b4d31-76aa-48af-b5a5-44d29830cb54" (UID: "730b4d31-76aa-48af-b5a5-44d29830cb54"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.078064 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "730b4d31-76aa-48af-b5a5-44d29830cb54" (UID: "730b4d31-76aa-48af-b5a5-44d29830cb54"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.083145 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730b4d31-76aa-48af-b5a5-44d29830cb54-kube-api-access-7jcp4" (OuterVolumeSpecName: "kube-api-access-7jcp4") pod "730b4d31-76aa-48af-b5a5-44d29830cb54" (UID: "730b4d31-76aa-48af-b5a5-44d29830cb54"). InnerVolumeSpecName "kube-api-access-7jcp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.090864 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-scripts" (OuterVolumeSpecName: "scripts") pod "730b4d31-76aa-48af-b5a5-44d29830cb54" (UID: "730b4d31-76aa-48af-b5a5-44d29830cb54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.124884 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "730b4d31-76aa-48af-b5a5-44d29830cb54" (UID: "730b4d31-76aa-48af-b5a5-44d29830cb54"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.180348 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jcp4\" (UniqueName: \"kubernetes.io/projected/730b4d31-76aa-48af-b5a5-44d29830cb54-kube-api-access-7jcp4\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.180389 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.180429 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.180439 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.180449 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/730b4d31-76aa-48af-b5a5-44d29830cb54-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.183864 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "730b4d31-76aa-48af-b5a5-44d29830cb54" (UID: "730b4d31-76aa-48af-b5a5-44d29830cb54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.215612 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-config-data" (OuterVolumeSpecName: "config-data") pod "730b4d31-76aa-48af-b5a5-44d29830cb54" (UID: "730b4d31-76aa-48af-b5a5-44d29830cb54"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.282444 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.282496 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730b4d31-76aa-48af-b5a5-44d29830cb54-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.401204 4728 generic.go:334] "Generic (PLEG): container finished" podID="fd18bbb4-d813-4688-ad80-574154978db4" containerID="6d040dba62774b2756aa4d5ffd2d481fcb23e48f2773d7846d6ec0893a05317a" exitCode=143 Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.401301 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd18bbb4-d813-4688-ad80-574154978db4","Type":"ContainerDied","Data":"6d040dba62774b2756aa4d5ffd2d481fcb23e48f2773d7846d6ec0893a05317a"} Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.402964 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j4npp" event={"ID":"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71","Type":"ContainerStarted","Data":"c65cd0dfe0e3f33d8c1c82ddaba005e76585e312197cc1e78c077290dde8e154"} Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.405638 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"730b4d31-76aa-48af-b5a5-44d29830cb54","Type":"ContainerDied","Data":"c6a9be9ee24b761f9027f96a3e3c9d52ff5877727577357944309272aa8cf7ec"} Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.405713 4728 scope.go:117] "RemoveContainer" containerID="71f4131fb4f41aec0a9dd220e5617066dfdbed0a93ff6980dcf64f906e0c3c72" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.405665 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.423920 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-j4npp" podStartSLOduration=1.8993212929999999 podStartE2EDuration="10.423903174s" podCreationTimestamp="2026-02-04 11:47:44 +0000 UTC" firstStartedPulling="2026-02-04 11:47:45.158593582 +0000 UTC m=+1214.301297967" lastFinishedPulling="2026-02-04 11:47:53.683175463 +0000 UTC m=+1222.825879848" observedRunningTime="2026-02-04 11:47:54.420940683 +0000 UTC m=+1223.563645068" watchObservedRunningTime="2026-02-04 11:47:54.423903174 +0000 UTC m=+1223.566607559" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.428315 4728 scope.go:117] "RemoveContainer" containerID="8b0149ad79811bcc633bf91f6c7bd99c94b3b9ecd8b7d68032875501fcdff921" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.456544 4728 scope.go:117] "RemoveContainer" containerID="b406825e6ac62fa8d801834249b7082f78ee826af7175056821582891aec862e" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.457836 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.466858 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.501010 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:54 crc kubenswrapper[4728]: E0204 11:47:54.501590 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="ceilometer-central-agent" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.501611 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="ceilometer-central-agent" Feb 04 11:47:54 crc kubenswrapper[4728]: E0204 11:47:54.501634 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="sg-core" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.501641 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="sg-core" Feb 04 11:47:54 crc kubenswrapper[4728]: E0204 11:47:54.501674 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="proxy-httpd" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.501680 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="proxy-httpd" Feb 04 11:47:54 crc kubenswrapper[4728]: E0204 11:47:54.501695 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="ceilometer-notification-agent" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.501701 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="ceilometer-notification-agent" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.502328 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="ceilometer-notification-agent" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.502353 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="proxy-httpd" Feb 04 11:47:54 crc 
kubenswrapper[4728]: I0204 11:47:54.502367 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="sg-core" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.502396 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="235ea075-2aaf-4f43-a38a-83f118af4592" containerName="heat-cfnapi" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.502414 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" containerName="ceilometer-central-agent" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.505224 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.507418 4728 scope.go:117] "RemoveContainer" containerID="0b1a450e517be9022aaf55ecb7221bd13c8dba499d9aed422c40f42121a1612c" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.511221 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.511394 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.552215 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.591678 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-scripts\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.591745 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-config-data\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.591981 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-log-httpd\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.592012 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksg7z\" (UniqueName: \"kubernetes.io/projected/c83fa078-fb92-48af-89bf-eacec73c21ec-kube-api-access-ksg7z\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.592029 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.592080 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.592098 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-run-httpd\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.694293 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-scripts\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.694404 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-config-data\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.694443 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-log-httpd\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.694466 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksg7z\" (UniqueName: \"kubernetes.io/projected/c83fa078-fb92-48af-89bf-eacec73c21ec-kube-api-access-ksg7z\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.694502 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.694568 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.694586 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-run-httpd\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.696893 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-log-httpd\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.696888 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-run-httpd\") pod \"ceilometer-0\" (UID: 
\"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.700499 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.701315 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-scripts\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.708233 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-config-data\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.713531 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.716949 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksg7z\" (UniqueName: \"kubernetes.io/projected/c83fa078-fb92-48af-89bf-eacec73c21ec-kube-api-access-ksg7z\") pod \"ceilometer-0\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " pod="openstack/ceilometer-0" Feb 04 11:47:54 crc kubenswrapper[4728]: I0204 11:47:54.907601 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:47:55 crc kubenswrapper[4728]: I0204 11:47:55.003164 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:55 crc kubenswrapper[4728]: I0204 11:47:55.395593 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:47:55 crc kubenswrapper[4728]: W0204 11:47:55.397868 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc83fa078_fb92_48af_89bf_eacec73c21ec.slice/crio-0cbc701af109ee248de0a42bd6abd9a2dab8056429809bd02f322c65e7eccb65 WatchSource:0}: Error finding container 0cbc701af109ee248de0a42bd6abd9a2dab8056429809bd02f322c65e7eccb65: Status 404 returned error can't find the container with id 0cbc701af109ee248de0a42bd6abd9a2dab8056429809bd02f322c65e7eccb65 Feb 04 11:47:55 crc kubenswrapper[4728]: I0204 11:47:55.415856 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c83fa078-fb92-48af-89bf-eacec73c21ec","Type":"ContainerStarted","Data":"0cbc701af109ee248de0a42bd6abd9a2dab8056429809bd02f322c65e7eccb65"} Feb 04 11:47:55 crc kubenswrapper[4728]: I0204 11:47:55.564413 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="730b4d31-76aa-48af-b5a5-44d29830cb54" path="/var/lib/kubelet/pods/730b4d31-76aa-48af-b5a5-44d29830cb54/volumes" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.428686 4728 generic.go:334] "Generic (PLEG): container finished" podID="ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" containerID="304804edb116bd86772664ca52ea88307a40cbbf866b97b0dffa97e6fecbd3ea" exitCode=0 Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.428871 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d","Type":"ContainerDied","Data":"304804edb116bd86772664ca52ea88307a40cbbf866b97b0dffa97e6fecbd3ea"} Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.432576 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c83fa078-fb92-48af-89bf-eacec73c21ec","Type":"ContainerStarted","Data":"1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907"} Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.711093 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.835023 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.835337 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-combined-ca-bundle\") pod \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.835440 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-logs\") pod \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.835467 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-config-data\") pod \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.835513 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6r7b\" (UniqueName: \"kubernetes.io/projected/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-kube-api-access-z6r7b\") pod \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.835608 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-httpd-run\") pod \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.835697 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-public-tls-certs\") pod \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.835773 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-scripts\") pod \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\" (UID: \"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d\") " Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.836067 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" (UID: "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.836609 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.836645 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-logs" (OuterVolumeSpecName: "logs") pod "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" (UID: "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.839648 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" (UID: "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.847672 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-scripts" (OuterVolumeSpecName: "scripts") pod "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" (UID: "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.851089 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-kube-api-access-z6r7b" (OuterVolumeSpecName: "kube-api-access-z6r7b") pod "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" (UID: "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d"). InnerVolumeSpecName "kube-api-access-z6r7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.913631 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-config-data" (OuterVolumeSpecName: "config-data") pod "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" (UID: "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.914982 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" (UID: "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.920819 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" (UID: "ea257ab3-f8f8-4546-a1e1-f5af6b1d857d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.937982 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.938208 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.938294 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.938367 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.938430 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.938505 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.938641 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6r7b\" (UniqueName: \"kubernetes.io/projected/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d-kube-api-access-z6r7b\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:56 crc kubenswrapper[4728]: I0204 11:47:56.959132 4728 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.040853 4728 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.442691 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea257ab3-f8f8-4546-a1e1-f5af6b1d857d","Type":"ContainerDied","Data":"9913d1a0dd20ae9df333e9fca2b423373adb1f2e2f5d81f9b8aab772cb4f07b8"} Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.442707 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.443086 4728 scope.go:117] "RemoveContainer" containerID="304804edb116bd86772664ca52ea88307a40cbbf866b97b0dffa97e6fecbd3ea" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.450969 4728 generic.go:334] "Generic (PLEG): container finished" podID="fd18bbb4-d813-4688-ad80-574154978db4" containerID="29d18c750afd6b4c043a6698b36057d2034dc163a11f7d6fe752bef2b3b52f80" exitCode=0 Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.451099 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd18bbb4-d813-4688-ad80-574154978db4","Type":"ContainerDied","Data":"29d18c750afd6b4c043a6698b36057d2034dc163a11f7d6fe752bef2b3b52f80"} Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.457791 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c83fa078-fb92-48af-89bf-eacec73c21ec","Type":"ContainerStarted","Data":"94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb"} Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.466489 4728 scope.go:117] "RemoveContainer" containerID="006412edd580c08a1199a125e3d71dba305327732d47a37b5785b686f562634b" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.500471 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.518858 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.529901 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:47:57 crc kubenswrapper[4728]: E0204 11:47:57.530321 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" containerName="glance-httpd" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.530336 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" containerName="glance-httpd" Feb 04 11:47:57 crc kubenswrapper[4728]: E0204 11:47:57.530415 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" containerName="glance-log" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.530424 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" containerName="glance-log" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.530634 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" containerName="glance-log" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.530653 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" containerName="glance-httpd" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.531740 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.537194 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.542966 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.566520 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea257ab3-f8f8-4546-a1e1-f5af6b1d857d" path="/var/lib/kubelet/pods/ea257ab3-f8f8-4546-a1e1-f5af6b1d857d/volumes" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.580145 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.653950 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.654007 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.654032 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-scripts\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.654193 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9e7acb6-6488-4369-9c08-f3843af8169c-logs\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.654216 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.654234 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8vzh\" (UniqueName: \"kubernetes.io/projected/d9e7acb6-6488-4369-9c08-f3843af8169c-kube-api-access-g8vzh\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.654291 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.654363 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d9e7acb6-6488-4369-9c08-f3843af8169c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.773827 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.774076 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.774110 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-scripts\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.774168 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9e7acb6-6488-4369-9c08-f3843af8169c-logs\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.774188 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.774205 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8vzh\" (UniqueName: \"kubernetes.io/projected/d9e7acb6-6488-4369-9c08-f3843af8169c-kube-api-access-g8vzh\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.774247 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-config-data\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.774291 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d9e7acb6-6488-4369-9c08-f3843af8169c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " 
pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.774642 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d9e7acb6-6488-4369-9c08-f3843af8169c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.774885 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.775016 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9e7acb6-6488-4369-9c08-f3843af8169c-logs\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.788695 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.793701 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-scripts\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.794277 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-config-data\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.795310 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e7acb6-6488-4369-9c08-f3843af8169c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.799307 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8vzh\" (UniqueName: \"kubernetes.io/projected/d9e7acb6-6488-4369-9c08-f3843af8169c-kube-api-access-g8vzh\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.818371 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d9e7acb6-6488-4369-9c08-f3843af8169c\") " pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.896938 4728 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.956041 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.979318 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-config-data\") pod \"fd18bbb4-d813-4688-ad80-574154978db4\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.979387 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-combined-ca-bundle\") pod \"fd18bbb4-d813-4688-ad80-574154978db4\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.979523 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgjmq\" (UniqueName: \"kubernetes.io/projected/fd18bbb4-d813-4688-ad80-574154978db4-kube-api-access-zgjmq\") pod \"fd18bbb4-d813-4688-ad80-574154978db4\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.979567 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"fd18bbb4-d813-4688-ad80-574154978db4\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.979624 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-scripts\") pod \"fd18bbb4-d813-4688-ad80-574154978db4\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.979663 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-logs\") pod \"fd18bbb4-d813-4688-ad80-574154978db4\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.979687 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-internal-tls-certs\") pod \"fd18bbb4-d813-4688-ad80-574154978db4\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.979818 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-httpd-run\") pod \"fd18bbb4-d813-4688-ad80-574154978db4\" (UID: \"fd18bbb4-d813-4688-ad80-574154978db4\") " Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.980978 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fd18bbb4-d813-4688-ad80-574154978db4" (UID: "fd18bbb4-d813-4688-ad80-574154978db4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.981789 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-logs" (OuterVolumeSpecName: "logs") pod "fd18bbb4-d813-4688-ad80-574154978db4" (UID: "fd18bbb4-d813-4688-ad80-574154978db4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.991248 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "fd18bbb4-d813-4688-ad80-574154978db4" (UID: "fd18bbb4-d813-4688-ad80-574154978db4"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.992237 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-scripts" (OuterVolumeSpecName: "scripts") pod "fd18bbb4-d813-4688-ad80-574154978db4" (UID: "fd18bbb4-d813-4688-ad80-574154978db4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:57 crc kubenswrapper[4728]: I0204 11:47:57.995329 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd18bbb4-d813-4688-ad80-574154978db4-kube-api-access-zgjmq" (OuterVolumeSpecName: "kube-api-access-zgjmq") pod "fd18bbb4-d813-4688-ad80-574154978db4" (UID: "fd18bbb4-d813-4688-ad80-574154978db4"). InnerVolumeSpecName "kube-api-access-zgjmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.032926 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd18bbb4-d813-4688-ad80-574154978db4" (UID: "fd18bbb4-d813-4688-ad80-574154978db4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.082778 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fd18bbb4-d813-4688-ad80-574154978db4" (UID: "fd18bbb4-d813-4688-ad80-574154978db4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.082888 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.082912 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgjmq\" (UniqueName: \"kubernetes.io/projected/fd18bbb4-d813-4688-ad80-574154978db4-kube-api-access-zgjmq\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.082954 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.082966 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.082979 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.082990 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd18bbb4-d813-4688-ad80-574154978db4-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.106947 4728 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.122610 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-config-data" (OuterVolumeSpecName: "config-data") pod "fd18bbb4-d813-4688-ad80-574154978db4" (UID: "fd18bbb4-d813-4688-ad80-574154978db4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.184990 4728 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.185324 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.185338 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd18bbb4-d813-4688-ad80-574154978db4-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.470215 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd18bbb4-d813-4688-ad80-574154978db4","Type":"ContainerDied","Data":"651cd5061b3f4590899b7ccfe7b0228753d1c80c090b4e5bc5075c4db381db1f"} Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.470273 4728 scope.go:117] "RemoveContainer" containerID="29d18c750afd6b4c043a6698b36057d2034dc163a11f7d6fe752bef2b3b52f80" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.470406 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.489219 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c83fa078-fb92-48af-89bf-eacec73c21ec","Type":"ContainerStarted","Data":"2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74"} Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.545385 4728 scope.go:117] "RemoveContainer" containerID="6d040dba62774b2756aa4d5ffd2d481fcb23e48f2773d7846d6ec0893a05317a" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.548006 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.593260 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.638508 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:47:58 crc kubenswrapper[4728]: E0204 11:47:58.639097 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd18bbb4-d813-4688-ad80-574154978db4" containerName="glance-httpd" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.639119 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd18bbb4-d813-4688-ad80-574154978db4" containerName="glance-httpd" Feb 04 11:47:58 crc kubenswrapper[4728]: E0204 11:47:58.639150 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd18bbb4-d813-4688-ad80-574154978db4" containerName="glance-log" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.639161 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd18bbb4-d813-4688-ad80-574154978db4" containerName="glance-log" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.639382 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd18bbb4-d813-4688-ad80-574154978db4" containerName="glance-log" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.639401 4728 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fd18bbb4-d813-4688-ad80-574154978db4" containerName="glance-httpd" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.644537 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.650082 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.650137 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.660872 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.688429 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.706136 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.706198 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.706268 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.706315 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.706439 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8hb\" (UniqueName: \"kubernetes.io/projected/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-kube-api-access-2p8hb\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.717978 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.718604 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.718810 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.822919 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.822997 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.823017 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.823047 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.823070 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.823114 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p8hb\" (UniqueName: \"kubernetes.io/projected/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-kube-api-access-2p8hb\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.823135 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.823151 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.824655 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.824665 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.824942 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.830520 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.834699 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.834713 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.837304 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.849158 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p8hb\" (UniqueName: \"kubernetes.io/projected/6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a-kube-api-access-2p8hb\") pod \"glance-default-internal-api-0\" (UID: \"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:58 crc kubenswrapper[4728]: I0204 11:47:58.857094 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a\") " pod="openstack/glance-default-internal-api-0" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.109902 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.392172 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.433877 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data-custom\") pod \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.433919 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-combined-ca-bundle\") pod \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.433951 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcn8f\" (UniqueName: \"kubernetes.io/projected/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-kube-api-access-hcn8f\") pod \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.434005 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data\") pod \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\" (UID: \"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6\") " Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.439997 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6" (UID: "ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.442900 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-kube-api-access-hcn8f" (OuterVolumeSpecName: "kube-api-access-hcn8f") pod "ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6" (UID: "ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6"). InnerVolumeSpecName "kube-api-access-hcn8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.500242 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6" (UID: "ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.503087 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d9e7acb6-6488-4369-9c08-f3843af8169c","Type":"ContainerStarted","Data":"d6cc765bec9e24c9ac3cf2531df41edf9f0ec081f9d530b36ae9d6816d9fc10a"} Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.511499 4728 generic.go:334] "Generic (PLEG): container finished" podID="ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6" containerID="d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059" exitCode=0 Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.511543 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cd578fb67-gp57x" event={"ID":"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6","Type":"ContainerDied","Data":"d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059"} Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.511568 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7cd578fb67-gp57x" event={"ID":"ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6","Type":"ContainerDied","Data":"5356fead71eb0ba1ec63d96598a5402907695f1379c4014e6952dd3aca5f1d92"} Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.511584 4728 scope.go:117] "RemoveContainer" containerID="d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.511687 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7cd578fb67-gp57x" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.537687 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.537721 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.537733 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcn8f\" (UniqueName: \"kubernetes.io/projected/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-kube-api-access-hcn8f\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.540957 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data" (OuterVolumeSpecName: "config-data") pod "ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6" (UID: "ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.570162 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd18bbb4-d813-4688-ad80-574154978db4" path="/var/lib/kubelet/pods/fd18bbb4-d813-4688-ad80-574154978db4/volumes" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.639994 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.692012 4728 scope.go:117] "RemoveContainer" containerID="d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059" Feb 04 11:47:59 crc kubenswrapper[4728]: E0204 11:47:59.694517 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059\": container with ID starting with d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059 not found: ID does not exist" containerID="d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.694573 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059"} err="failed to get container status \"d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059\": rpc error: code = NotFound desc = could not find container \"d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059\": container with ID starting with d9136d850140603ee63806d74ed36be0e222e5c6337b331c812d8e7db28f7059 not found: ID does not exist" Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.836524 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.858162 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7cd578fb67-gp57x"] Feb 04 11:47:59 crc kubenswrapper[4728]: I0204 11:47:59.870916 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7cd578fb67-gp57x"] Feb 04 11:47:59 crc kubenswrapper[4728]: E0204 11:47:59.884847 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddd78fa4_d1bb_44b2_b7a5_08eb43c93cf6.slice/crio-5356fead71eb0ba1ec63d96598a5402907695f1379c4014e6952dd3aca5f1d92\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddd78fa4_d1bb_44b2_b7a5_08eb43c93cf6.slice\": RecentStats: unable to find data in memory cache]" Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 11:48:00.533479 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a","Type":"ContainerStarted","Data":"814eb03fe149b1665faa758ed82f874d101729c31cb0800dffa35a4abd6b2b42"} Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 11:48:00.539371 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d9e7acb6-6488-4369-9c08-f3843af8169c","Type":"ContainerStarted","Data":"4593bc3dfbfdfa79ab1250c1e7f8c3209c0cd8856b9fe3981756ec9a3ed6d54e"} Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 
Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 11:48:00.539421 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d9e7acb6-6488-4369-9c08-f3843af8169c","Type":"ContainerStarted","Data":"859bcb02de0a58290b172d96d6de2c61dfb177823b1f4a75d83fe31794495cad"}
Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 11:48:00.545942 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c83fa078-fb92-48af-89bf-eacec73c21ec","Type":"ContainerStarted","Data":"4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037"}
Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 11:48:00.546257 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 11:48:00.546236 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="ceilometer-central-agent" containerID="cri-o://1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907" gracePeriod=30
Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 11:48:00.546297 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="sg-core" containerID="cri-o://2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74" gracePeriod=30
Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 11:48:00.546345 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="proxy-httpd" containerID="cri-o://4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037" gracePeriod=30
Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 11:48:00.546376 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="ceilometer-notification-agent" containerID="cri-o://94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb" gracePeriod=30
Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 11:48:00.585345 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.585323852 podStartE2EDuration="3.585323852s" podCreationTimestamp="2026-02-04 11:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:00.576058634 +0000 UTC m=+1229.718763049" watchObservedRunningTime="2026-02-04 11:48:00.585323852 +0000 UTC m=+1229.728028237"
Feb 04 11:48:00 crc kubenswrapper[4728]: I0204 11:48:00.625173 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.442954822 podStartE2EDuration="6.625126474s" podCreationTimestamp="2026-02-04 11:47:54 +0000 UTC" firstStartedPulling="2026-02-04 11:47:55.400707823 +0000 UTC m=+1224.543412208" lastFinishedPulling="2026-02-04 11:47:59.582879475 +0000 UTC m=+1228.725583860" observedRunningTime="2026-02-04 11:48:00.609168237 +0000 UTC m=+1229.751872652" watchObservedRunningTime="2026-02-04 11:48:00.625126474 +0000 UTC m=+1229.767830859"
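The two "Observed pod startup duration" records above carry their own arithmetic, and it checks out: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For ceilometer-0: 11:48:00.625126474 − 11:47:54 = 6.625126474s end to end, the pull window is 11:47:59.582879475 − 11:47:55.400707823 = 4.182171652s, and 6.625126474 − 4.182171652 = 2.442954822s, exactly the logged SLO duration. For the glance pods the pull timestamps are the zero time, so SLO and E2E durations coincide. A short, runnable Go check of the same numbers (the timestamps are copied from the log; the derivation of the two fields is inferred from them, not quoted from kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse uses the log's own timestamp format; Go accepts the extra
// fractional seconds in the input even though the layout omits them.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-04 11:47:54 +0000 UTC")           // podCreationTimestamp
	firstPull := mustParse("2026-02-04 11:47:55.400707823 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-02-04 11:47:59.582879475 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2026-02-04 11:48:00.625126474 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)     // podStartE2EDuration: 6.625126474s
	pull := lastPull.Sub(firstPull) // image-pull window:   4.182171652s
	slo := e2e - pull               // podStartSLOduration: 2.442954822s
	fmt.Println(e2e, pull, slo)
}
```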
path="/var/lib/kubelet/pods/ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6/volumes" Feb 04 11:48:01 crc kubenswrapper[4728]: I0204 11:48:01.577784 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a","Type":"ContainerStarted","Data":"ace7971ae55842c471a16c57d1fadc73931dab101b8bc3403cf1c18754211b10"} Feb 04 11:48:01 crc kubenswrapper[4728]: I0204 11:48:01.577851 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a","Type":"ContainerStarted","Data":"1c172b5db4bee0cb73ec63046d7c1ef171af5c2b9741502c9d4b75b21fbbc861"} Feb 04 11:48:01 crc kubenswrapper[4728]: I0204 11:48:01.589075 4728 generic.go:334] "Generic (PLEG): container finished" podID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerID="4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037" exitCode=0 Feb 04 11:48:01 crc kubenswrapper[4728]: I0204 11:48:01.589152 4728 generic.go:334] "Generic (PLEG): container finished" podID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerID="2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74" exitCode=2 Feb 04 11:48:01 crc kubenswrapper[4728]: I0204 11:48:01.589163 4728 generic.go:334] "Generic (PLEG): container finished" podID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerID="94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb" exitCode=0 Feb 04 11:48:01 crc kubenswrapper[4728]: I0204 11:48:01.589179 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c83fa078-fb92-48af-89bf-eacec73c21ec","Type":"ContainerDied","Data":"4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037"} Feb 04 11:48:01 crc kubenswrapper[4728]: I0204 11:48:01.589248 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c83fa078-fb92-48af-89bf-eacec73c21ec","Type":"ContainerDied","Data":"2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74"} Feb 04 11:48:01 crc kubenswrapper[4728]: I0204 11:48:01.589262 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c83fa078-fb92-48af-89bf-eacec73c21ec","Type":"ContainerDied","Data":"94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb"} Feb 04 11:48:01 crc kubenswrapper[4728]: I0204 11:48:01.612304 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.612279761 podStartE2EDuration="3.612279761s" podCreationTimestamp="2026-02-04 11:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:01.602133942 +0000 UTC m=+1230.744838337" watchObservedRunningTime="2026-02-04 11:48:01.612279761 +0000 UTC m=+1230.754984166" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.418244 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.573665 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-log-httpd\") pod \"c83fa078-fb92-48af-89bf-eacec73c21ec\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.573714 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-scripts\") pod \"c83fa078-fb92-48af-89bf-eacec73c21ec\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.573797 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-config-data\") pod \"c83fa078-fb92-48af-89bf-eacec73c21ec\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.573874 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-sg-core-conf-yaml\") pod \"c83fa078-fb92-48af-89bf-eacec73c21ec\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.573933 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksg7z\" (UniqueName: \"kubernetes.io/projected/c83fa078-fb92-48af-89bf-eacec73c21ec-kube-api-access-ksg7z\") pod \"c83fa078-fb92-48af-89bf-eacec73c21ec\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.573971 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-run-httpd\") pod \"c83fa078-fb92-48af-89bf-eacec73c21ec\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.574059 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-combined-ca-bundle\") pod \"c83fa078-fb92-48af-89bf-eacec73c21ec\" (UID: \"c83fa078-fb92-48af-89bf-eacec73c21ec\") " Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.574356 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c83fa078-fb92-48af-89bf-eacec73c21ec" (UID: "c83fa078-fb92-48af-89bf-eacec73c21ec"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.574641 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c83fa078-fb92-48af-89bf-eacec73c21ec" (UID: "c83fa078-fb92-48af-89bf-eacec73c21ec"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.580038 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c83fa078-fb92-48af-89bf-eacec73c21ec-kube-api-access-ksg7z" (OuterVolumeSpecName: "kube-api-access-ksg7z") pod "c83fa078-fb92-48af-89bf-eacec73c21ec" (UID: "c83fa078-fb92-48af-89bf-eacec73c21ec"). InnerVolumeSpecName "kube-api-access-ksg7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.609969 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-scripts" (OuterVolumeSpecName: "scripts") pod "c83fa078-fb92-48af-89bf-eacec73c21ec" (UID: "c83fa078-fb92-48af-89bf-eacec73c21ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.614333 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c83fa078-fb92-48af-89bf-eacec73c21ec" (UID: "c83fa078-fb92-48af-89bf-eacec73c21ec"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.624222 4728 generic.go:334] "Generic (PLEG): container finished" podID="b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71" containerID="c65cd0dfe0e3f33d8c1c82ddaba005e76585e312197cc1e78c077290dde8e154" exitCode=0 Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.624296 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j4npp" event={"ID":"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71","Type":"ContainerDied","Data":"c65cd0dfe0e3f33d8c1c82ddaba005e76585e312197cc1e78c077290dde8e154"} Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.627616 4728 generic.go:334] "Generic (PLEG): container finished" podID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerID="1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907" exitCode=0 Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.627657 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c83fa078-fb92-48af-89bf-eacec73c21ec","Type":"ContainerDied","Data":"1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907"} Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.627684 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c83fa078-fb92-48af-89bf-eacec73c21ec","Type":"ContainerDied","Data":"0cbc701af109ee248de0a42bd6abd9a2dab8056429809bd02f322c65e7eccb65"} Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.627701 4728 scope.go:117] "RemoveContainer" containerID="4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.627866 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.660366 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c83fa078-fb92-48af-89bf-eacec73c21ec" (UID: "c83fa078-fb92-48af-89bf-eacec73c21ec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.676527 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.676571 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksg7z\" (UniqueName: \"kubernetes.io/projected/c83fa078-fb92-48af-89bf-eacec73c21ec-kube-api-access-ksg7z\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.676585 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.676597 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.676607 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c83fa078-fb92-48af-89bf-eacec73c21ec-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.676617 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.683321 4728 scope.go:117] "RemoveContainer" containerID="2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.686700 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-config-data" (OuterVolumeSpecName: "config-data") pod "c83fa078-fb92-48af-89bf-eacec73c21ec" (UID: "c83fa078-fb92-48af-89bf-eacec73c21ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.702018 4728 scope.go:117] "RemoveContainer" containerID="94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.725161 4728 scope.go:117] "RemoveContainer" containerID="1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.759555 4728 scope.go:117] "RemoveContainer" containerID="4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037" Feb 04 11:48:05 crc kubenswrapper[4728]: E0204 11:48:05.760939 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037\": container with ID starting with 4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037 not found: ID does not exist" containerID="4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.760979 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037"} err="failed to get container status \"4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037\": rpc error: code = NotFound desc = could not find container \"4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037\": container with ID starting with 4c730a8712a0e645b63147bfc503470fe6fa4cfeec1e527cf5bb4583c4cbf037 not found: ID does not exist" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.761000 4728 scope.go:117] "RemoveContainer" containerID="2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74" Feb 04 11:48:05 crc kubenswrapper[4728]: E0204 11:48:05.761321 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74\": container with ID starting with 2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74 not found: ID does not exist" containerID="2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.761345 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74"} err="failed to get container status \"2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74\": rpc error: code = NotFound desc = could not find container \"2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74\": container with ID starting with 2dbf491805e22665aa6eacf8f3fd3f99ac57a35407ec1e98328958dae365da74 not found: ID does not exist" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.761362 4728 scope.go:117] "RemoveContainer" containerID="94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb" Feb 04 11:48:05 crc kubenswrapper[4728]: E0204 11:48:05.762218 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb\": container with ID starting with 94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb not found: ID does not exist" containerID="94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb" Feb 04 11:48:05 crc 
kubenswrapper[4728]: I0204 11:48:05.762242 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb"} err="failed to get container status \"94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb\": rpc error: code = NotFound desc = could not find container \"94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb\": container with ID starting with 94965e66878184cb3c575add3777dfa9cc1fbe8ec0765f509600b0872f0cc8bb not found: ID does not exist" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.762256 4728 scope.go:117] "RemoveContainer" containerID="1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907" Feb 04 11:48:05 crc kubenswrapper[4728]: E0204 11:48:05.762537 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907\": container with ID starting with 1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907 not found: ID does not exist" containerID="1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.762569 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907"} err="failed to get container status \"1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907\": rpc error: code = NotFound desc = could not find container \"1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907\": container with ID starting with 1d10de080db23fef993deec4fea2e9a42f6f19e145d28b8fa48e02c7b30b8907 not found: ID does not exist" Feb 04 11:48:05 crc kubenswrapper[4728]: I0204 11:48:05.778652 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83fa078-fb92-48af-89bf-eacec73c21ec-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.043471 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.056698 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.070871 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:06 crc kubenswrapper[4728]: E0204 11:48:06.071338 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6" containerName="heat-engine" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.071365 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6" containerName="heat-engine" Feb 04 11:48:06 crc kubenswrapper[4728]: E0204 11:48:06.071409 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="proxy-httpd" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.071425 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="proxy-httpd" Feb 04 11:48:06 crc kubenswrapper[4728]: E0204 11:48:06.071442 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="ceilometer-notification-agent" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 
11:48:06.071450 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="ceilometer-notification-agent" Feb 04 11:48:06 crc kubenswrapper[4728]: E0204 11:48:06.071469 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="sg-core" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.071477 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="sg-core" Feb 04 11:48:06 crc kubenswrapper[4728]: E0204 11:48:06.071500 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="ceilometer-central-agent" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.071507 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="ceilometer-central-agent" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.071717 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="proxy-httpd" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.071734 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="ceilometer-central-agent" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.071803 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="sg-core" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.071819 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd78fa4-d1bb-44b2-b7a5-08eb43c93cf6" containerName="heat-engine" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.071835 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" containerName="ceilometer-notification-agent" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.081673 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.085155 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.086296 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-scripts\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.086364 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6494\" (UniqueName: \"kubernetes.io/projected/bd9c30f2-6808-4128-9d29-393f02d854ea-kube-api-access-x6494\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.086423 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.086482 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-config-data\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.086528 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-log-httpd\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.086603 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-run-httpd\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.086633 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.090118 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.121028 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.191256 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-scripts\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.191328 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6494\" (UniqueName: \"kubernetes.io/projected/bd9c30f2-6808-4128-9d29-393f02d854ea-kube-api-access-x6494\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.191376 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.191419 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-config-data\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.191456 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-log-httpd\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.191972 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-log-httpd\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.192088 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-run-httpd\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.192117 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.192493 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-run-httpd\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.195863 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-scripts\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.195902 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.197049 4728 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-config-data\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.197961 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.218176 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6494\" (UniqueName: \"kubernetes.io/projected/bd9c30f2-6808-4128-9d29-393f02d854ea-kube-api-access-x6494\") pod \"ceilometer-0\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.414609 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.895145 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:06 crc kubenswrapper[4728]: I0204 11:48:06.909183 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.106805 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-scripts\") pod \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.106858 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-combined-ca-bundle\") pod \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.106884 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-config-data\") pod \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.107604 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cf7v\" (UniqueName: \"kubernetes.io/projected/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-kube-api-access-7cf7v\") pod \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\" (UID: \"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71\") " Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.113965 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-scripts" (OuterVolumeSpecName: "scripts") pod "b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71" (UID: "b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.114130 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-kube-api-access-7cf7v" (OuterVolumeSpecName: "kube-api-access-7cf7v") pod "b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71" (UID: "b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71"). InnerVolumeSpecName "kube-api-access-7cf7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.131924 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-config-data" (OuterVolumeSpecName: "config-data") pod "b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71" (UID: "b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.138078 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71" (UID: "b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.209400 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cf7v\" (UniqueName: \"kubernetes.io/projected/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-kube-api-access-7cf7v\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.209435 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.209446 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.209454 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.564319 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83fa078-fb92-48af-89bf-eacec73c21ec" path="/var/lib/kubelet/pods/c83fa078-fb92-48af-89bf-eacec73c21ec/volumes" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.661298 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd9c30f2-6808-4128-9d29-393f02d854ea","Type":"ContainerStarted","Data":"be52c0d173517eab8eee4084e8fe52db151855fc0876e77d4f4e463e41e91a60"} Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.674039 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j4npp" event={"ID":"b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71","Type":"ContainerDied","Data":"60aceed8e105d9fdc17394fe6ca8c97fecf860b41ebfc1ef9909839e87f0f09e"} Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.674084 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60aceed8e105d9fdc17394fe6ca8c97fecf860b41ebfc1ef9909839e87f0f09e" Feb 04 
11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.674112 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j4npp" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.737813 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.754460 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 11:48:07 crc kubenswrapper[4728]: E0204 11:48:07.754941 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71" containerName="nova-cell0-conductor-db-sync" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.754962 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71" containerName="nova-cell0-conductor-db-sync" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.755162 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71" containerName="nova-cell0-conductor-db-sync" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.755927 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.759663 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4dtz9" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.760201 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.784862 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.823486 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhbq\" (UniqueName: \"kubernetes.io/projected/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-kube-api-access-7lhbq\") pod \"nova-cell0-conductor-0\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.823594 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.823633 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.924613 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.925010 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.925102 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhbq\" (UniqueName: \"kubernetes.io/projected/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-kube-api-access-7lhbq\") pod \"nova-cell0-conductor-0\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.929684 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.933467 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.942035 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhbq\" (UniqueName: \"kubernetes.io/projected/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-kube-api-access-7lhbq\") pod \"nova-cell0-conductor-0\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.957887 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 04 11:48:07 crc kubenswrapper[4728]: I0204 11:48:07.958014 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 04 11:48:08 crc kubenswrapper[4728]: I0204 11:48:08.000802 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 04 11:48:08 crc kubenswrapper[4728]: I0204 11:48:08.012271 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 04 11:48:08 crc kubenswrapper[4728]: I0204 11:48:08.075170 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:08 crc kubenswrapper[4728]: I0204 11:48:08.559134 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 11:48:08 crc kubenswrapper[4728]: W0204 11:48:08.562321 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5444a70_d5ae_49ec_aa43_b74a2fcdad54.slice/crio-7fa96a804a02afb1f5dfdf37362d8257911455a500f26179019b576ae036c322 WatchSource:0}: Error finding container 7fa96a804a02afb1f5dfdf37362d8257911455a500f26179019b576ae036c322: Status 404 returned error can't find the container with id 7fa96a804a02afb1f5dfdf37362d8257911455a500f26179019b576ae036c322 Feb 04 11:48:08 crc kubenswrapper[4728]: I0204 11:48:08.735297 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d5444a70-d5ae-49ec-aa43-b74a2fcdad54","Type":"ContainerStarted","Data":"7fa96a804a02afb1f5dfdf37362d8257911455a500f26179019b576ae036c322"} Feb 04 11:48:08 crc kubenswrapper[4728]: I0204 11:48:08.736619 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd9c30f2-6808-4128-9d29-393f02d854ea","Type":"ContainerStarted","Data":"c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a"} Feb 04 11:48:08 crc kubenswrapper[4728]: I0204 11:48:08.736888 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 04 11:48:08 crc kubenswrapper[4728]: I0204 11:48:08.736912 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 04 11:48:09 crc kubenswrapper[4728]: I0204 11:48:09.110750 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 04 11:48:09 crc kubenswrapper[4728]: I0204 11:48:09.110820 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 04 11:48:09 crc kubenswrapper[4728]: I0204 11:48:09.148426 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 04 11:48:09 crc kubenswrapper[4728]: I0204 11:48:09.164307 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 04 11:48:09 crc kubenswrapper[4728]: I0204 11:48:09.747871 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d5444a70-d5ae-49ec-aa43-b74a2fcdad54","Type":"ContainerStarted","Data":"2536647e1223363b5805a632b7ae91f392b1921bc5932fcfbec4c98f8c8b515c"} Feb 04 11:48:09 crc kubenswrapper[4728]: I0204 11:48:09.750985 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 04 11:48:09 crc kubenswrapper[4728]: I0204 11:48:09.751004 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 04 11:48:09 crc kubenswrapper[4728]: I0204 11:48:09.768544 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.768524343 podStartE2EDuration="2.768524343s" podCreationTimestamp="2026-02-04 11:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-04 11:48:09.768040132 +0000 UTC m=+1238.910744537" watchObservedRunningTime="2026-02-04 11:48:09.768524343 +0000 UTC m=+1238.911228728" Feb 04 11:48:10 crc kubenswrapper[4728]: I0204 11:48:10.761202 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd9c30f2-6808-4128-9d29-393f02d854ea","Type":"ContainerStarted","Data":"9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c"} Feb 04 11:48:10 crc kubenswrapper[4728]: I0204 11:48:10.761649 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd9c30f2-6808-4128-9d29-393f02d854ea","Type":"ContainerStarted","Data":"87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54"} Feb 04 11:48:10 crc kubenswrapper[4728]: I0204 11:48:10.762222 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:11 crc kubenswrapper[4728]: I0204 11:48:11.037346 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 04 11:48:11 crc kubenswrapper[4728]: I0204 11:48:11.037510 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 11:48:11 crc kubenswrapper[4728]: I0204 11:48:11.103377 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 04 11:48:11 crc kubenswrapper[4728]: I0204 11:48:11.768093 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 11:48:11 crc kubenswrapper[4728]: I0204 11:48:11.768896 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 04 11:48:11 crc kubenswrapper[4728]: I0204 11:48:11.819428 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 04 11:48:11 crc kubenswrapper[4728]: I0204 11:48:11.820613 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.119442 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.756963 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hssm4"] Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.758712 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.762100 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.762117 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.780141 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hssm4"] Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.820633 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="ceilometer-central-agent" containerID="cri-o://c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a" gracePeriod=30 Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.820942 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd9c30f2-6808-4128-9d29-393f02d854ea","Type":"ContainerStarted","Data":"4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78"} Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.820996 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="proxy-httpd" containerID="cri-o://4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78" gracePeriod=30 Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.821031 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.821065 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="sg-core" containerID="cri-o://9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c" gracePeriod=30 Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.821109 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="ceilometer-notification-agent" containerID="cri-o://87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54" gracePeriod=30 Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.847931 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhcrs\" (UniqueName: \"kubernetes.io/projected/fed81957-c76f-4a31-837d-947294fe38a4-kube-api-access-fhcrs\") pod \"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.847984 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.848098 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-scripts\") pod 
\"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.848185 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-config-data\") pod \"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.852527 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.019537456 podStartE2EDuration="7.852506033s" podCreationTimestamp="2026-02-04 11:48:06 +0000 UTC" firstStartedPulling="2026-02-04 11:48:06.902001696 +0000 UTC m=+1236.044706081" lastFinishedPulling="2026-02-04 11:48:12.734970273 +0000 UTC m=+1241.877674658" observedRunningTime="2026-02-04 11:48:13.84095654 +0000 UTC m=+1242.983660925" watchObservedRunningTime="2026-02-04 11:48:13.852506033 +0000 UTC m=+1242.995210418" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.922671 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.924197 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.939965 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.941509 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.946177 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.946435 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.949975 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-scripts\") pod \"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.950078 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-config-data\") pod \"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.950215 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcrs\" (UniqueName: \"kubernetes.io/projected/fed81957-c76f-4a31-837d-947294fe38a4-kube-api-access-fhcrs\") pod \"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.950251 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-combined-ca-bundle\") pod 
\"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.956647 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-scripts\") pod \"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.960985 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.961070 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-config-data\") pod \"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.967348 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.985196 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:48:13 crc kubenswrapper[4728]: I0204 11:48:13.990308 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhcrs\" (UniqueName: \"kubernetes.io/projected/fed81957-c76f-4a31-837d-947294fe38a4-kube-api-access-fhcrs\") pod \"nova-cell0-cell-mapping-hssm4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.085521 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.145235 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.146648 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.149485 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.171778 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.171992 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-config-data\") pod \"nova-scheduler-0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.172089 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.172132 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhkvh\" (UniqueName: \"kubernetes.io/projected/5211fb35-aec6-47a3-b309-ec02052d52c0-kube-api-access-xhkvh\") pod \"nova-scheduler-0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.172257 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bdsv\" (UniqueName: \"kubernetes.io/projected/0a5e2fc0-c280-467d-b5e4-793d459cbbab-kube-api-access-5bdsv\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.172287 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-config-data\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.172333 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.172446 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a5e2fc0-c280-467d-b5e4-793d459cbbab-logs\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.231414 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.243797 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.252977 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.276056 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.276135 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhkvh\" (UniqueName: \"kubernetes.io/projected/5211fb35-aec6-47a3-b309-ec02052d52c0-kube-api-access-xhkvh\") pod \"nova-scheduler-0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.277427 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pt5f\" (UniqueName: \"kubernetes.io/projected/9a4fd92d-0009-486d-97e1-d086721c5336-kube-api-access-2pt5f\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.277508 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-config-data\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.277574 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bdsv\" (UniqueName: \"kubernetes.io/projected/0a5e2fc0-c280-467d-b5e4-793d459cbbab-kube-api-access-5bdsv\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.277622 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-config-data\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.277698 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.277841 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a4fd92d-0009-486d-97e1-d086721c5336-logs\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.277987 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " 
pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.278051 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a5e2fc0-c280-467d-b5e4-793d459cbbab-logs\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.278079 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-config-data\") pod \"nova-scheduler-0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.278625 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a5e2fc0-c280-467d-b5e4-793d459cbbab-logs\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.283036 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.284120 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-config-data\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.290002 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.303285 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.304918 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-config-data\") pod \"nova-scheduler-0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.315104 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bdsv\" (UniqueName: \"kubernetes.io/projected/0a5e2fc0-c280-467d-b5e4-793d459cbbab-kube-api-access-5bdsv\") pod \"nova-api-0\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.315187 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-r2b52"] Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.319047 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.325003 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhkvh\" (UniqueName: \"kubernetes.io/projected/5211fb35-aec6-47a3-b309-ec02052d52c0-kube-api-access-xhkvh\") pod \"nova-scheduler-0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.355142 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-r2b52"] Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.359649 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.380106 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.380169 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.380239 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfv8g\" (UniqueName: \"kubernetes.io/projected/579a7c27-d276-4066-b865-2f621e74410d-kube-api-access-kfv8g\") pod \"nova-cell1-novncproxy-0\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.380296 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.380321 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pt5f\" (UniqueName: \"kubernetes.io/projected/9a4fd92d-0009-486d-97e1-d086721c5336-kube-api-access-2pt5f\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.380379 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-config-data\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.380470 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a4fd92d-0009-486d-97e1-d086721c5336-logs\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.384024 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.384667 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.396504 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-config-data\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.402014 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pt5f\" (UniqueName: \"kubernetes.io/projected/9a4fd92d-0009-486d-97e1-d086721c5336-kube-api-access-2pt5f\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.403263 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a4fd92d-0009-486d-97e1-d086721c5336-logs\") pod \"nova-metadata-0\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.483547 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.483600 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhfxf\" (UniqueName: \"kubernetes.io/projected/2dc28b54-9e03-4356-9586-198cbebe01bc-kube-api-access-mhfxf\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.485027 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.485068 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfv8g\" (UniqueName: \"kubernetes.io/projected/579a7c27-d276-4066-b865-2f621e74410d-kube-api-access-kfv8g\") pod \"nova-cell1-novncproxy-0\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.485178 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.485251 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.485328 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-svc\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.485497 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-config\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.485531 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.496231 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.502428 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.510551 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfv8g\" (UniqueName: \"kubernetes.io/projected/579a7c27-d276-4066-b865-2f621e74410d-kube-api-access-kfv8g\") pod \"nova-cell1-novncproxy-0\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.566579 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.583226 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.586602 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-config\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.586658 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.586680 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhfxf\" (UniqueName: \"kubernetes.io/projected/2dc28b54-9e03-4356-9586-198cbebe01bc-kube-api-access-mhfxf\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.586712 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.586801 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.586834 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-svc\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.587631 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-svc\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.588135 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-config\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.588660 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 
11:48:14.590297 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.595027 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.614788 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhfxf\" (UniqueName: \"kubernetes.io/projected/2dc28b54-9e03-4356-9586-198cbebe01bc-kube-api-access-mhfxf\") pod \"dnsmasq-dns-9b86998b5-r2b52\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") " pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.716812 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.761282 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hssm4"] Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.855040 4728 generic.go:334] "Generic (PLEG): container finished" podID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerID="4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78" exitCode=0 Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.855353 4728 generic.go:334] "Generic (PLEG): container finished" podID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerID="9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c" exitCode=2 Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.855364 4728 generic.go:334] "Generic (PLEG): container finished" podID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerID="87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54" exitCode=0 Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.856522 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd9c30f2-6808-4128-9d29-393f02d854ea","Type":"ContainerDied","Data":"4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78"} Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.856566 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd9c30f2-6808-4128-9d29-393f02d854ea","Type":"ContainerDied","Data":"9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c"} Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.856614 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd9c30f2-6808-4128-9d29-393f02d854ea","Type":"ContainerDied","Data":"87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54"} Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.860197 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hssm4" event={"ID":"fed81957-c76f-4a31-837d-947294fe38a4","Type":"ContainerStarted","Data":"a2807799026ffaeb9cc49f7b20b8b6bf7a1683a869f04c224c3a02671ce55d91"} Feb 04 11:48:14 crc kubenswrapper[4728]: I0204 11:48:14.938252 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-0"] Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.159232 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.265205 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.274744 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.428509 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-r2b52"] Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.467047 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z8hwn"] Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.468611 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.473050 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.473136 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.478283 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z8hwn"] Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.614273 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-config-data\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.614791 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7mh4\" (UniqueName: \"kubernetes.io/projected/f9187093-4720-491c-b6a0-8a7bdafab687-kube-api-access-q7mh4\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.615615 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.615687 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-scripts\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.718289 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-config-data\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " 
pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.718429 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7mh4\" (UniqueName: \"kubernetes.io/projected/f9187093-4720-491c-b6a0-8a7bdafab687-kube-api-access-q7mh4\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.718467 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.718498 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-scripts\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.723615 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-scripts\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.729282 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-config-data\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.734460 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.738087 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7mh4\" (UniqueName: \"kubernetes.io/projected/f9187093-4720-491c-b6a0-8a7bdafab687-kube-api-access-q7mh4\") pod \"nova-cell1-conductor-db-sync-z8hwn\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") " pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.801475 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z8hwn" Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.869466 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a4fd92d-0009-486d-97e1-d086721c5336","Type":"ContainerStarted","Data":"333b57be2e8c31d31c84e3b2acc2ba8bf125621f78b3de5cc31e23239a95be58"} Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.870488 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5211fb35-aec6-47a3-b309-ec02052d52c0","Type":"ContainerStarted","Data":"6de6408cabbf658fec25b341fb4f97bbcab06cd157bd328ecb0581922c7f1352"} Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.874575 4728 generic.go:334] "Generic (PLEG): container finished" podID="2dc28b54-9e03-4356-9586-198cbebe01bc" containerID="b86268580ce94c2fc44a1665453d757a90aa363e09745db1af521b730f1dfb55" exitCode=0 Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.874669 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-r2b52" event={"ID":"2dc28b54-9e03-4356-9586-198cbebe01bc","Type":"ContainerDied","Data":"b86268580ce94c2fc44a1665453d757a90aa363e09745db1af521b730f1dfb55"} Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.874692 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-r2b52" event={"ID":"2dc28b54-9e03-4356-9586-198cbebe01bc","Type":"ContainerStarted","Data":"a085fbbecdb504e04320f80d5eb472c0ec63726e43bf89cc3fb10e4cf118d26a"} Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.877181 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"579a7c27-d276-4066-b865-2f621e74410d","Type":"ContainerStarted","Data":"071f6e2416213c4b6980a3bdd13eafe8272be5a8ca5b8cabb541a7d2d661fefa"} Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.884249 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a5e2fc0-c280-467d-b5e4-793d459cbbab","Type":"ContainerStarted","Data":"742a4a0b32f9d2bbe533e4ea0ccfdeba8352af37db54c0eb0772d9b0473fed29"} Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.886921 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hssm4" event={"ID":"fed81957-c76f-4a31-837d-947294fe38a4","Type":"ContainerStarted","Data":"7de2cdd8aa4f383e8d1b12ce3e65304c53384060beca7b2935a4581e53b61461"} Feb 04 11:48:15 crc kubenswrapper[4728]: I0204 11:48:15.928626 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hssm4" podStartSLOduration=2.928608454 podStartE2EDuration="2.928608454s" podCreationTimestamp="2026-02-04 11:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:15.926442962 +0000 UTC m=+1245.069147347" watchObservedRunningTime="2026-02-04 11:48:15.928608454 +0000 UTC m=+1245.071312839" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.336362 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z8hwn"] Feb 04 11:48:16 crc kubenswrapper[4728]: W0204 11:48:16.359843 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9187093_4720_491c_b6a0_8a7bdafab687.slice/crio-e1eef790bc52d1d0179001fa919e39d842a81cf4e831d0059d652d12916e55e7 WatchSource:0}: 
Error finding container e1eef790bc52d1d0179001fa919e39d842a81cf4e831d0059d652d12916e55e7: Status 404 returned error can't find the container with id e1eef790bc52d1d0179001fa919e39d842a81cf4e831d0059d652d12916e55e7 Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.680324 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.742921 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-run-httpd\") pod \"bd9c30f2-6808-4128-9d29-393f02d854ea\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.742995 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-config-data\") pod \"bd9c30f2-6808-4128-9d29-393f02d854ea\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.743083 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-sg-core-conf-yaml\") pod \"bd9c30f2-6808-4128-9d29-393f02d854ea\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.743135 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-log-httpd\") pod \"bd9c30f2-6808-4128-9d29-393f02d854ea\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.743158 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-scripts\") pod \"bd9c30f2-6808-4128-9d29-393f02d854ea\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.743268 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6494\" (UniqueName: \"kubernetes.io/projected/bd9c30f2-6808-4128-9d29-393f02d854ea-kube-api-access-x6494\") pod \"bd9c30f2-6808-4128-9d29-393f02d854ea\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.743299 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-combined-ca-bundle\") pod \"bd9c30f2-6808-4128-9d29-393f02d854ea\" (UID: \"bd9c30f2-6808-4128-9d29-393f02d854ea\") " Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.743836 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bd9c30f2-6808-4128-9d29-393f02d854ea" (UID: "bd9c30f2-6808-4128-9d29-393f02d854ea"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.744581 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bd9c30f2-6808-4128-9d29-393f02d854ea" (UID: "bd9c30f2-6808-4128-9d29-393f02d854ea"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.749452 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-scripts" (OuterVolumeSpecName: "scripts") pod "bd9c30f2-6808-4128-9d29-393f02d854ea" (UID: "bd9c30f2-6808-4128-9d29-393f02d854ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.753043 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd9c30f2-6808-4128-9d29-393f02d854ea-kube-api-access-x6494" (OuterVolumeSpecName: "kube-api-access-x6494") pod "bd9c30f2-6808-4128-9d29-393f02d854ea" (UID: "bd9c30f2-6808-4128-9d29-393f02d854ea"). InnerVolumeSpecName "kube-api-access-x6494". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.777218 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bd9c30f2-6808-4128-9d29-393f02d854ea" (UID: "bd9c30f2-6808-4128-9d29-393f02d854ea"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.846986 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.847020 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.847032 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6494\" (UniqueName: \"kubernetes.io/projected/bd9c30f2-6808-4128-9d29-393f02d854ea-kube-api-access-x6494\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.847042 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9c30f2-6808-4128-9d29-393f02d854ea-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.847053 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.869864 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd9c30f2-6808-4128-9d29-393f02d854ea" (UID: "bd9c30f2-6808-4128-9d29-393f02d854ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.883344 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-config-data" (OuterVolumeSpecName: "config-data") pod "bd9c30f2-6808-4128-9d29-393f02d854ea" (UID: "bd9c30f2-6808-4128-9d29-393f02d854ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.917457 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z8hwn" event={"ID":"f9187093-4720-491c-b6a0-8a7bdafab687","Type":"ContainerStarted","Data":"7087b35f7c7b90f96088a2ba96a9769cf267eddd4cb3a1c1b2df04c3649b12b1"} Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.917500 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z8hwn" event={"ID":"f9187093-4720-491c-b6a0-8a7bdafab687","Type":"ContainerStarted","Data":"e1eef790bc52d1d0179001fa919e39d842a81cf4e831d0059d652d12916e55e7"} Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.930723 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-r2b52" event={"ID":"2dc28b54-9e03-4356-9586-198cbebe01bc","Type":"ContainerStarted","Data":"088d238541b85a306b36fef383dc8808b5ad9d1a0ebb5a4d8b3a623ff9ddcc90"} Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.930971 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.947339 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-z8hwn" podStartSLOduration=1.947313667 podStartE2EDuration="1.947313667s" podCreationTimestamp="2026-02-04 11:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:16.946628561 +0000 UTC m=+1246.089332956" watchObservedRunningTime="2026-02-04 11:48:16.947313667 +0000 UTC m=+1246.090018062" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.956150 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.956194 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9c30f2-6808-4128-9d29-393f02d854ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.961483 4728 generic.go:334] "Generic (PLEG): container finished" podID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerID="c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a" exitCode=0 Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.962249 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.962536 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd9c30f2-6808-4128-9d29-393f02d854ea","Type":"ContainerDied","Data":"c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a"} Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.962572 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd9c30f2-6808-4128-9d29-393f02d854ea","Type":"ContainerDied","Data":"be52c0d173517eab8eee4084e8fe52db151855fc0876e77d4f4e463e41e91a60"} Feb 04 11:48:16 crc kubenswrapper[4728]: I0204 11:48:16.962608 4728 scope.go:117] "RemoveContainer" containerID="4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.021489 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-r2b52" podStartSLOduration=3.02146886 podStartE2EDuration="3.02146886s" podCreationTimestamp="2026-02-04 11:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:16.968919808 +0000 UTC m=+1246.111624193" watchObservedRunningTime="2026-02-04 11:48:17.02146886 +0000 UTC m=+1246.164173245" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.027608 4728 scope.go:117] "RemoveContainer" containerID="9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.040381 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.063810 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.076015 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:17 crc kubenswrapper[4728]: E0204 11:48:17.076420 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="proxy-httpd" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.076442 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="proxy-httpd" Feb 04 11:48:17 crc kubenswrapper[4728]: E0204 11:48:17.076458 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="ceilometer-central-agent" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.076465 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="ceilometer-central-agent" Feb 04 11:48:17 crc kubenswrapper[4728]: E0204 11:48:17.076498 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="sg-core" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.076504 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="sg-core" Feb 04 11:48:17 crc kubenswrapper[4728]: E0204 11:48:17.076512 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="ceilometer-notification-agent" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.076518 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="ceilometer-notification-agent" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.076667 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="ceilometer-notification-agent" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.076686 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="ceilometer-central-agent" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.076705 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="proxy-httpd" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.076714 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" containerName="sg-core" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.080130 4728 scope.go:117] "RemoveContainer" containerID="87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.080955 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.084273 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.084722 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.089812 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.155045 4728 scope.go:117] "RemoveContainer" containerID="c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.161430 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzc9c\" (UniqueName: \"kubernetes.io/projected/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-kube-api-access-xzc9c\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.161492 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.161573 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-run-httpd\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.161593 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-config-data\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.161659 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.161886 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-log-httpd\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.162136 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-scripts\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.185476 4728 scope.go:117] "RemoveContainer" containerID="4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78" Feb 04 11:48:17 crc kubenswrapper[4728]: E0204 11:48:17.186357 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78\": container with ID starting with 4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78 not found: ID does not exist" containerID="4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.186385 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78"} err="failed to get container status \"4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78\": rpc error: code = NotFound desc = could not find container \"4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78\": container with ID starting with 4d0de20a0e31707daa494ea2fff2e62f886b5452211d05c8dbeb837b83cebe78 not found: ID does not exist" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.186407 4728 scope.go:117] "RemoveContainer" containerID="9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c" Feb 04 11:48:17 crc kubenswrapper[4728]: E0204 11:48:17.187204 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c\": container with ID starting with 9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c not found: ID does not exist" containerID="9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.187226 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c"} err="failed to get container status \"9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c\": rpc error: code = NotFound desc = could not find container \"9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c\": container with ID starting with 9c4fc7904ed8dd490839e7c3c779b14c6821649d6e73a76e30ed1b1697557d2c not found: ID does not exist" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.187259 4728 
scope.go:117] "RemoveContainer" containerID="87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54" Feb 04 11:48:17 crc kubenswrapper[4728]: E0204 11:48:17.191373 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54\": container with ID starting with 87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54 not found: ID does not exist" containerID="87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.191432 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54"} err="failed to get container status \"87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54\": rpc error: code = NotFound desc = could not find container \"87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54\": container with ID starting with 87416d89290fe15bd2346b8b892cd10586d4871780b79a14380424919c950d54 not found: ID does not exist" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.191451 4728 scope.go:117] "RemoveContainer" containerID="c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a" Feb 04 11:48:17 crc kubenswrapper[4728]: E0204 11:48:17.191878 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a\": container with ID starting with c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a not found: ID does not exist" containerID="c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.191923 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a"} err="failed to get container status \"c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a\": rpc error: code = NotFound desc = could not find container \"c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a\": container with ID starting with c69fee4893ae98a4f64c850aa8e056b7d759fc1568cbf58c4371190520bf9f5a not found: ID does not exist" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.263665 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-log-httpd\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.263764 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-scripts\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.263794 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzc9c\" (UniqueName: \"kubernetes.io/projected/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-kube-api-access-xzc9c\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.263813 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.263858 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-run-httpd\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.263875 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-config-data\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.263916 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.265137 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-run-httpd\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.265667 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-log-httpd\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.280368 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-scripts\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.280384 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.281564 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.282772 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-config-data\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.282832 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xzc9c\" (UniqueName: \"kubernetes.io/projected/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-kube-api-access-xzc9c\") pod \"ceilometer-0\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.401837 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.601705 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd9c30f2-6808-4128-9d29-393f02d854ea" path="/var/lib/kubelet/pods/bd9c30f2-6808-4128-9d29-393f02d854ea/volumes" Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.639831 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 11:48:17 crc kubenswrapper[4728]: I0204 11:48:17.647611 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:48:18 crc kubenswrapper[4728]: I0204 11:48:18.035923 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:18 crc kubenswrapper[4728]: I0204 11:48:18.292122 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:48:18 crc kubenswrapper[4728]: I0204 11:48:18.306450 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 11:48:18 crc kubenswrapper[4728]: I0204 11:48:18.306876 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="d5444a70-d5ae-49ec-aa43-b74a2fcdad54" containerName="nova-cell0-conductor-conductor" containerID="cri-o://2536647e1223363b5805a632b7ae91f392b1921bc5932fcfbec4c98f8c8b515c" gracePeriod=30 Feb 04 11:48:18 crc kubenswrapper[4728]: I0204 11:48:18.317974 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 04 11:48:19 crc kubenswrapper[4728]: I0204 11:48:19.925874 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:20 crc kubenswrapper[4728]: W0204 11:48:20.249533 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d415ba3_d5c9_4757_a8c1_dddc5c8903ee.slice/crio-8ff824792a978874f207fc336c0e50f93e093288266860248132b23d741194ed WatchSource:0}: Error finding container 8ff824792a978874f207fc336c0e50f93e093288266860248132b23d741194ed: Status 404 returned error can't find the container with id 8ff824792a978874f207fc336c0e50f93e093288266860248132b23d741194ed Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.008458 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee","Type":"ContainerStarted","Data":"8ff824792a978874f207fc336c0e50f93e093288266860248132b23d741194ed"} Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.015678 4728 generic.go:334] "Generic (PLEG): container finished" podID="d5444a70-d5ae-49ec-aa43-b74a2fcdad54" containerID="2536647e1223363b5805a632b7ae91f392b1921bc5932fcfbec4c98f8c8b515c" exitCode=0 Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.015720 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d5444a70-d5ae-49ec-aa43-b74a2fcdad54","Type":"ContainerDied","Data":"2536647e1223363b5805a632b7ae91f392b1921bc5932fcfbec4c98f8c8b515c"} Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.668986 4728 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.793969 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lhbq\" (UniqueName: \"kubernetes.io/projected/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-kube-api-access-7lhbq\") pod \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.794096 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-config-data\") pod \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.794198 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-combined-ca-bundle\") pod \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\" (UID: \"d5444a70-d5ae-49ec-aa43-b74a2fcdad54\") " Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.803908 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-kube-api-access-7lhbq" (OuterVolumeSpecName: "kube-api-access-7lhbq") pod "d5444a70-d5ae-49ec-aa43-b74a2fcdad54" (UID: "d5444a70-d5ae-49ec-aa43-b74a2fcdad54"). InnerVolumeSpecName "kube-api-access-7lhbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.880794 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5444a70-d5ae-49ec-aa43-b74a2fcdad54" (UID: "d5444a70-d5ae-49ec-aa43-b74a2fcdad54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.883815 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-config-data" (OuterVolumeSpecName: "config-data") pod "d5444a70-d5ae-49ec-aa43-b74a2fcdad54" (UID: "d5444a70-d5ae-49ec-aa43-b74a2fcdad54"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.896918 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.896952 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lhbq\" (UniqueName: \"kubernetes.io/projected/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-kube-api-access-7lhbq\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:21 crc kubenswrapper[4728]: I0204 11:48:21.896966 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5444a70-d5ae-49ec-aa43-b74a2fcdad54-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.028221 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a4fd92d-0009-486d-97e1-d086721c5336","Type":"ContainerStarted","Data":"8fe10898df39507557bca5855413225205a1dc482245002f432ee3f81668570f"} Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.030776 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.031541 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d5444a70-d5ae-49ec-aa43-b74a2fcdad54","Type":"ContainerDied","Data":"7fa96a804a02afb1f5dfdf37362d8257911455a500f26179019b576ae036c322"} Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.031613 4728 scope.go:117] "RemoveContainer" containerID="2536647e1223363b5805a632b7ae91f392b1921bc5932fcfbec4c98f8c8b515c" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.037787 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5211fb35-aec6-47a3-b309-ec02052d52c0","Type":"ContainerStarted","Data":"b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2"} Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.038010 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5211fb35-aec6-47a3-b309-ec02052d52c0" containerName="nova-scheduler-scheduler" containerID="cri-o://b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2" gracePeriod=30 Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.044575 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"579a7c27-d276-4066-b865-2f621e74410d","Type":"ContainerStarted","Data":"9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923"} Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.044626 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="579a7c27-d276-4066-b865-2f621e74410d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923" gracePeriod=30 Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.047689 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a5e2fc0-c280-467d-b5e4-793d459cbbab","Type":"ContainerStarted","Data":"e97c5a43429f37a1155d2088e0281c61b72aa2e18e873d903c2bd5cbcf76959d"} Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.063927 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee","Type":"ContainerStarted","Data":"1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6"} Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.073885 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.985001337 podStartE2EDuration="9.073858914s" podCreationTimestamp="2026-02-04 11:48:13 +0000 UTC" firstStartedPulling="2026-02-04 11:48:15.175250254 +0000 UTC m=+1244.317954639" lastFinishedPulling="2026-02-04 11:48:21.264107831 +0000 UTC m=+1250.406812216" observedRunningTime="2026-02-04 11:48:22.053685428 +0000 UTC m=+1251.196389813" watchObservedRunningTime="2026-02-04 11:48:22.073858914 +0000 UTC m=+1251.216563309" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.108963 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.123004 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.137816 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 11:48:22 crc kubenswrapper[4728]: E0204 11:48:22.138353 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5444a70-d5ae-49ec-aa43-b74a2fcdad54" containerName="nova-cell0-conductor-conductor" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.138377 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5444a70-d5ae-49ec-aa43-b74a2fcdad54" containerName="nova-cell0-conductor-conductor" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.138654 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5444a70-d5ae-49ec-aa43-b74a2fcdad54" containerName="nova-cell0-conductor-conductor" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.139465 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.143178 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.126119173 podStartE2EDuration="8.143137801s" podCreationTimestamp="2026-02-04 11:48:14 +0000 UTC" firstStartedPulling="2026-02-04 11:48:15.250605426 +0000 UTC m=+1244.393309811" lastFinishedPulling="2026-02-04 11:48:21.267624054 +0000 UTC m=+1250.410328439" observedRunningTime="2026-02-04 11:48:22.085146731 +0000 UTC m=+1251.227851116" watchObservedRunningTime="2026-02-04 11:48:22.143137801 +0000 UTC m=+1251.285842186" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.144724 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.163529 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.309896 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cfef0c5-481e-49c4-b2e3-f37222f7aa50-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5cfef0c5-481e-49c4-b2e3-f37222f7aa50\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.310208 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfef0c5-481e-49c4-b2e3-f37222f7aa50-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5cfef0c5-481e-49c4-b2e3-f37222f7aa50\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.310290 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n4mt\" (UniqueName: \"kubernetes.io/projected/5cfef0c5-481e-49c4-b2e3-f37222f7aa50-kube-api-access-5n4mt\") pod \"nova-cell0-conductor-0\" (UID: \"5cfef0c5-481e-49c4-b2e3-f37222f7aa50\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.412307 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cfef0c5-481e-49c4-b2e3-f37222f7aa50-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5cfef0c5-481e-49c4-b2e3-f37222f7aa50\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.412353 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfef0c5-481e-49c4-b2e3-f37222f7aa50-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5cfef0c5-481e-49c4-b2e3-f37222f7aa50\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.412423 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n4mt\" (UniqueName: \"kubernetes.io/projected/5cfef0c5-481e-49c4-b2e3-f37222f7aa50-kube-api-access-5n4mt\") pod \"nova-cell0-conductor-0\" (UID: \"5cfef0c5-481e-49c4-b2e3-f37222f7aa50\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.416374 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5cfef0c5-481e-49c4-b2e3-f37222f7aa50-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5cfef0c5-481e-49c4-b2e3-f37222f7aa50\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.430573 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfef0c5-481e-49c4-b2e3-f37222f7aa50-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5cfef0c5-481e-49c4-b2e3-f37222f7aa50\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.434296 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n4mt\" (UniqueName: \"kubernetes.io/projected/5cfef0c5-481e-49c4-b2e3-f37222f7aa50-kube-api-access-5n4mt\") pod \"nova-cell0-conductor-0\" (UID: \"5cfef0c5-481e-49c4-b2e3-f37222f7aa50\") " pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.477129 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:22 crc kubenswrapper[4728]: I0204 11:48:22.966892 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 04 11:48:22 crc kubenswrapper[4728]: W0204 11:48:22.969469 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cfef0c5_481e_49c4_b2e3_f37222f7aa50.slice/crio-e0d93c3d37adc0a8c5cac48ddec5d688b9a60b262985200969a068d4671fd1eb WatchSource:0}: Error finding container e0d93c3d37adc0a8c5cac48ddec5d688b9a60b262985200969a068d4671fd1eb: Status 404 returned error can't find the container with id e0d93c3d37adc0a8c5cac48ddec5d688b9a60b262985200969a068d4671fd1eb Feb 04 11:48:23 crc kubenswrapper[4728]: I0204 11:48:23.078228 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a5e2fc0-c280-467d-b5e4-793d459cbbab","Type":"ContainerStarted","Data":"f8f5751cd1eff2c59e7e5a7754efe072b0ffa08b19dc60d330c75fb8446e32a6"} Feb 04 11:48:23 crc kubenswrapper[4728]: I0204 11:48:23.078376 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a5e2fc0-c280-467d-b5e4-793d459cbbab" containerName="nova-api-log" containerID="cri-o://e97c5a43429f37a1155d2088e0281c61b72aa2e18e873d903c2bd5cbcf76959d" gracePeriod=30 Feb 04 11:48:23 crc kubenswrapper[4728]: I0204 11:48:23.078905 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a5e2fc0-c280-467d-b5e4-793d459cbbab" containerName="nova-api-api" containerID="cri-o://f8f5751cd1eff2c59e7e5a7754efe072b0ffa08b19dc60d330c75fb8446e32a6" gracePeriod=30 Feb 04 11:48:23 crc kubenswrapper[4728]: I0204 11:48:23.087847 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee","Type":"ContainerStarted","Data":"81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec"} Feb 04 11:48:23 crc kubenswrapper[4728]: I0204 11:48:23.090330 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a4fd92d-0009-486d-97e1-d086721c5336","Type":"ContainerStarted","Data":"3d230cc56b28ff60ddb6098d5fbe195650aca916a5f890cb868ca203b8632cfb"} Feb 04 11:48:23 crc kubenswrapper[4728]: I0204 11:48:23.090470 4728 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="9a4fd92d-0009-486d-97e1-d086721c5336" containerName="nova-metadata-log" containerID="cri-o://8fe10898df39507557bca5855413225205a1dc482245002f432ee3f81668570f" gracePeriod=30 Feb 04 11:48:23 crc kubenswrapper[4728]: I0204 11:48:23.090561 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9a4fd92d-0009-486d-97e1-d086721c5336" containerName="nova-metadata-metadata" containerID="cri-o://3d230cc56b28ff60ddb6098d5fbe195650aca916a5f890cb868ca203b8632cfb" gracePeriod=30 Feb 04 11:48:23 crc kubenswrapper[4728]: I0204 11:48:23.100427 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5cfef0c5-481e-49c4-b2e3-f37222f7aa50","Type":"ContainerStarted","Data":"e0d93c3d37adc0a8c5cac48ddec5d688b9a60b262985200969a068d4671fd1eb"} Feb 04 11:48:23 crc kubenswrapper[4728]: I0204 11:48:23.106160 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.772926243 podStartE2EDuration="10.106141408s" podCreationTimestamp="2026-02-04 11:48:13 +0000 UTC" firstStartedPulling="2026-02-04 11:48:14.949101947 +0000 UTC m=+1244.091806332" lastFinishedPulling="2026-02-04 11:48:21.282317112 +0000 UTC m=+1250.425021497" observedRunningTime="2026-02-04 11:48:23.104282234 +0000 UTC m=+1252.246986619" watchObservedRunningTime="2026-02-04 11:48:23.106141408 +0000 UTC m=+1252.248845793" Feb 04 11:48:23 crc kubenswrapper[4728]: I0204 11:48:23.137557 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.153286378 podStartE2EDuration="9.137538211s" podCreationTimestamp="2026-02-04 11:48:14 +0000 UTC" firstStartedPulling="2026-02-04 11:48:15.278474555 +0000 UTC m=+1244.421178940" lastFinishedPulling="2026-02-04 11:48:21.262726388 +0000 UTC m=+1250.405430773" observedRunningTime="2026-02-04 11:48:23.125359733 +0000 UTC m=+1252.268064118" watchObservedRunningTime="2026-02-04 11:48:23.137538211 +0000 UTC m=+1252.280242596" Feb 04 11:48:23 crc kubenswrapper[4728]: I0204 11:48:23.564235 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5444a70-d5ae-49ec-aa43-b74a2fcdad54" path="/var/lib/kubelet/pods/d5444a70-d5ae-49ec-aa43-b74a2fcdad54/volumes" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.166183 4728 generic.go:334] "Generic (PLEG): container finished" podID="0a5e2fc0-c280-467d-b5e4-793d459cbbab" containerID="f8f5751cd1eff2c59e7e5a7754efe072b0ffa08b19dc60d330c75fb8446e32a6" exitCode=0 Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.166214 4728 generic.go:334] "Generic (PLEG): container finished" podID="0a5e2fc0-c280-467d-b5e4-793d459cbbab" containerID="e97c5a43429f37a1155d2088e0281c61b72aa2e18e873d903c2bd5cbcf76959d" exitCode=143 Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.167547 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a5e2fc0-c280-467d-b5e4-793d459cbbab","Type":"ContainerDied","Data":"f8f5751cd1eff2c59e7e5a7754efe072b0ffa08b19dc60d330c75fb8446e32a6"} Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.167600 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a5e2fc0-c280-467d-b5e4-793d459cbbab","Type":"ContainerDied","Data":"e97c5a43429f37a1155d2088e0281c61b72aa2e18e873d903c2bd5cbcf76959d"} Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.197062 4728 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee","Type":"ContainerStarted","Data":"d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2"} Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.206302 4728 generic.go:334] "Generic (PLEG): container finished" podID="9a4fd92d-0009-486d-97e1-d086721c5336" containerID="3d230cc56b28ff60ddb6098d5fbe195650aca916a5f890cb868ca203b8632cfb" exitCode=0 Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.206340 4728 generic.go:334] "Generic (PLEG): container finished" podID="9a4fd92d-0009-486d-97e1-d086721c5336" containerID="8fe10898df39507557bca5855413225205a1dc482245002f432ee3f81668570f" exitCode=143 Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.206394 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a4fd92d-0009-486d-97e1-d086721c5336","Type":"ContainerDied","Data":"3d230cc56b28ff60ddb6098d5fbe195650aca916a5f890cb868ca203b8632cfb"} Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.206425 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a4fd92d-0009-486d-97e1-d086721c5336","Type":"ContainerDied","Data":"8fe10898df39507557bca5855413225205a1dc482245002f432ee3f81668570f"} Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.210894 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5cfef0c5-481e-49c4-b2e3-f37222f7aa50","Type":"ContainerStarted","Data":"f10c073a48eba532bf5e643e80d3f9ee3b335af090cc1efe68a845739c202d2c"} Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.217981 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.277849 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.2778337779999998 podStartE2EDuration="2.277833778s" podCreationTimestamp="2026-02-04 11:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:24.271437637 +0000 UTC m=+1253.414142032" watchObservedRunningTime="2026-02-04 11:48:24.277833778 +0000 UTC m=+1253.420538163" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.385394 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.567370 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.567634 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.584107 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.588818 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.593930 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.718899 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-r2b52" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.773455 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ldgrq"] Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.773684 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" podUID="f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" containerName="dnsmasq-dns" containerID="cri-o://c3a7591d2eb9c34a5c88911faf670445074d147c974108fdbcc248d04dfc58d9" gracePeriod=10 Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.782470 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bdsv\" (UniqueName: \"kubernetes.io/projected/0a5e2fc0-c280-467d-b5e4-793d459cbbab-kube-api-access-5bdsv\") pod \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.782564 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-config-data\") pod \"9a4fd92d-0009-486d-97e1-d086721c5336\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.782594 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a4fd92d-0009-486d-97e1-d086721c5336-logs\") pod \"9a4fd92d-0009-486d-97e1-d086721c5336\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.782671 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-config-data\") pod \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.782706 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pt5f\" (UniqueName: \"kubernetes.io/projected/9a4fd92d-0009-486d-97e1-d086721c5336-kube-api-access-2pt5f\") pod \"9a4fd92d-0009-486d-97e1-d086721c5336\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.782801 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a5e2fc0-c280-467d-b5e4-793d459cbbab-logs\") pod \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\" (UID: \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.782832 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-combined-ca-bundle\") pod \"9a4fd92d-0009-486d-97e1-d086721c5336\" (UID: \"9a4fd92d-0009-486d-97e1-d086721c5336\") " Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.782871 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-combined-ca-bundle\") pod \"0a5e2fc0-c280-467d-b5e4-793d459cbbab\" (UID: 
\"0a5e2fc0-c280-467d-b5e4-793d459cbbab\") " Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.791530 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a5e2fc0-c280-467d-b5e4-793d459cbbab-logs" (OuterVolumeSpecName: "logs") pod "0a5e2fc0-c280-467d-b5e4-793d459cbbab" (UID: "0a5e2fc0-c280-467d-b5e4-793d459cbbab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.791803 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a4fd92d-0009-486d-97e1-d086721c5336-logs" (OuterVolumeSpecName: "logs") pod "9a4fd92d-0009-486d-97e1-d086721c5336" (UID: "9a4fd92d-0009-486d-97e1-d086721c5336"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.799504 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a4fd92d-0009-486d-97e1-d086721c5336-kube-api-access-2pt5f" (OuterVolumeSpecName: "kube-api-access-2pt5f") pod "9a4fd92d-0009-486d-97e1-d086721c5336" (UID: "9a4fd92d-0009-486d-97e1-d086721c5336"). InnerVolumeSpecName "kube-api-access-2pt5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.800280 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5e2fc0-c280-467d-b5e4-793d459cbbab-kube-api-access-5bdsv" (OuterVolumeSpecName: "kube-api-access-5bdsv") pod "0a5e2fc0-c280-467d-b5e4-793d459cbbab" (UID: "0a5e2fc0-c280-467d-b5e4-793d459cbbab"). InnerVolumeSpecName "kube-api-access-5bdsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.835196 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-config-data" (OuterVolumeSpecName: "config-data") pod "9a4fd92d-0009-486d-97e1-d086721c5336" (UID: "9a4fd92d-0009-486d-97e1-d086721c5336"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.850163 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-config-data" (OuterVolumeSpecName: "config-data") pod "0a5e2fc0-c280-467d-b5e4-793d459cbbab" (UID: "0a5e2fc0-c280-467d-b5e4-793d459cbbab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.855294 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a5e2fc0-c280-467d-b5e4-793d459cbbab" (UID: "0a5e2fc0-c280-467d-b5e4-793d459cbbab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.864069 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a4fd92d-0009-486d-97e1-d086721c5336" (UID: "9a4fd92d-0009-486d-97e1-d086721c5336"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.886053 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.886080 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pt5f\" (UniqueName: \"kubernetes.io/projected/9a4fd92d-0009-486d-97e1-d086721c5336-kube-api-access-2pt5f\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.886091 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a5e2fc0-c280-467d-b5e4-793d459cbbab-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.886100 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.886109 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5e2fc0-c280-467d-b5e4-793d459cbbab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.886118 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bdsv\" (UniqueName: \"kubernetes.io/projected/0a5e2fc0-c280-467d-b5e4-793d459cbbab-kube-api-access-5bdsv\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.886127 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a4fd92d-0009-486d-97e1-d086721c5336-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:24 crc kubenswrapper[4728]: I0204 11:48:24.886135 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a4fd92d-0009-486d-97e1-d086721c5336-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.222255 4728 generic.go:334] "Generic (PLEG): container finished" podID="fed81957-c76f-4a31-837d-947294fe38a4" containerID="7de2cdd8aa4f383e8d1b12ce3e65304c53384060beca7b2935a4581e53b61461" exitCode=0 Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.222344 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hssm4" event={"ID":"fed81957-c76f-4a31-837d-947294fe38a4","Type":"ContainerDied","Data":"7de2cdd8aa4f383e8d1b12ce3e65304c53384060beca7b2935a4581e53b61461"} Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.226722 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a4fd92d-0009-486d-97e1-d086721c5336","Type":"ContainerDied","Data":"333b57be2e8c31d31c84e3b2acc2ba8bf125621f78b3de5cc31e23239a95be58"} Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.226793 4728 scope.go:117] "RemoveContainer" containerID="3d230cc56b28ff60ddb6098d5fbe195650aca916a5f890cb868ca203b8632cfb" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.226801 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.231560 4728 generic.go:334] "Generic (PLEG): container finished" podID="f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" containerID="c3a7591d2eb9c34a5c88911faf670445074d147c974108fdbcc248d04dfc58d9" exitCode=0 Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.231981 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" event={"ID":"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b","Type":"ContainerDied","Data":"c3a7591d2eb9c34a5c88911faf670445074d147c974108fdbcc248d04dfc58d9"} Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.235895 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.236938 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a5e2fc0-c280-467d-b5e4-793d459cbbab","Type":"ContainerDied","Data":"742a4a0b32f9d2bbe533e4ea0ccfdeba8352af37db54c0eb0772d9b0473fed29"} Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.280013 4728 scope.go:117] "RemoveContainer" containerID="8fe10898df39507557bca5855413225205a1dc482245002f432ee3f81668570f" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.315845 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.317451 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.364906 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.382384 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.389777 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.393368 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:48:25 crc kubenswrapper[4728]: E0204 11:48:25.393746 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" containerName="init" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.393836 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" containerName="init" Feb 04 11:48:25 crc kubenswrapper[4728]: E0204 11:48:25.393912 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" containerName="dnsmasq-dns" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.393969 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" containerName="dnsmasq-dns" Feb 04 11:48:25 crc kubenswrapper[4728]: E0204 11:48:25.394022 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a4fd92d-0009-486d-97e1-d086721c5336" containerName="nova-metadata-log" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.394116 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a4fd92d-0009-486d-97e1-d086721c5336" containerName="nova-metadata-log" Feb 04 11:48:25 crc kubenswrapper[4728]: E0204 11:48:25.394187 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5e2fc0-c280-467d-b5e4-793d459cbbab" containerName="nova-api-api" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.394236 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5e2fc0-c280-467d-b5e4-793d459cbbab" containerName="nova-api-api" Feb 04 11:48:25 crc kubenswrapper[4728]: E0204 11:48:25.394285 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a4fd92d-0009-486d-97e1-d086721c5336" containerName="nova-metadata-metadata" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.394332 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a4fd92d-0009-486d-97e1-d086721c5336" containerName="nova-metadata-metadata" Feb 04 11:48:25 crc kubenswrapper[4728]: E0204 11:48:25.394386 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5e2fc0-c280-467d-b5e4-793d459cbbab" containerName="nova-api-log" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.394434 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5e2fc0-c280-467d-b5e4-793d459cbbab" containerName="nova-api-log" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.394641 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5e2fc0-c280-467d-b5e4-793d459cbbab" containerName="nova-api-log" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.394712 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a4fd92d-0009-486d-97e1-d086721c5336" containerName="nova-metadata-metadata" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.394788 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" containerName="dnsmasq-dns" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.394844 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5e2fc0-c280-467d-b5e4-793d459cbbab" containerName="nova-api-api" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.394913 4728 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9a4fd92d-0009-486d-97e1-d086721c5336" containerName="nova-metadata-log" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.395855 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.397889 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.399589 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.400344 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-nb\") pod \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.400393 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-svc\") pod \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.400484 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-sb\") pod \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.400503 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-swift-storage-0\") pod \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.400551 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9qzz\" (UniqueName: \"kubernetes.io/projected/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-kube-api-access-n9qzz\") pod \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.400650 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-config\") pod \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.400927 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sbzz\" (UniqueName: \"kubernetes.io/projected/24ae7c06-fabb-496c-a457-524541d93aed-kube-api-access-9sbzz\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.400951 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 
11:48:25.401035 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.401063 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-config-data\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.401108 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24ae7c06-fabb-496c-a457-524541d93aed-logs\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.408481 4728 scope.go:117] "RemoveContainer" containerID="f8f5751cd1eff2c59e7e5a7754efe072b0ffa08b19dc60d330c75fb8446e32a6" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.409292 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.428830 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.435149 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.441989 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-kube-api-access-n9qzz" (OuterVolumeSpecName: "kube-api-access-n9qzz") pod "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" (UID: "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b"). InnerVolumeSpecName "kube-api-access-n9qzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.442474 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.482902 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.483829 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" (UID: "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.484137 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" (UID: "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.487925 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-config" (OuterVolumeSpecName: "config") pod "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" (UID: "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.502375 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" (UID: "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.502708 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-swift-storage-0\") pod \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\" (UID: \"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b\") " Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503094 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24ae7c06-fabb-496c-a457-524541d93aed-logs\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503195 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sbzz\" (UniqueName: \"kubernetes.io/projected/24ae7c06-fabb-496c-a457-524541d93aed-kube-api-access-9sbzz\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503247 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503299 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503320 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-config-data\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503465 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01246ada-c9e0-4fb7-9869-63750b7c955a-logs\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503498 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503521 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24ae7c06-fabb-496c-a457-524541d93aed-logs\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: W0204 11:48:25.503202 4728 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b/volumes/kubernetes.io~configmap/dns-swift-storage-0 Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503571 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" (UID: "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503558 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-config-data\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503700 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl4f9\" (UniqueName: \"kubernetes.io/projected/01246ada-c9e0-4fb7-9869-63750b7c955a-kube-api-access-rl4f9\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503805 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503830 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503844 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503857 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.503870 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9qzz\" (UniqueName: \"kubernetes.io/projected/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-kube-api-access-n9qzz\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.509252 4728 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.510899 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-config-data\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.517272 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.518107 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" (UID: "f06dc94f-182a-4b89-92c4-bb86ea7b4a0b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.524451 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sbzz\" (UniqueName: \"kubernetes.io/projected/24ae7c06-fabb-496c-a457-524541d93aed-kube-api-access-9sbzz\") pod \"nova-metadata-0\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") " pod="openstack/nova-metadata-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.565869 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5e2fc0-c280-467d-b5e4-793d459cbbab" path="/var/lib/kubelet/pods/0a5e2fc0-c280-467d-b5e4-793d459cbbab/volumes" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.566590 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a4fd92d-0009-486d-97e1-d086721c5336" path="/var/lib/kubelet/pods/9a4fd92d-0009-486d-97e1-d086721c5336/volumes" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.586804 4728 scope.go:117] "RemoveContainer" containerID="e97c5a43429f37a1155d2088e0281c61b72aa2e18e873d903c2bd5cbcf76959d" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.605165 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01246ada-c9e0-4fb7-9869-63750b7c955a-logs\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.605239 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl4f9\" (UniqueName: \"kubernetes.io/projected/01246ada-c9e0-4fb7-9869-63750b7c955a-kube-api-access-rl4f9\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.605316 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 
11:48:25.605332 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-config-data\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.605400 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.606050 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01246ada-c9e0-4fb7-9869-63750b7c955a-logs\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.612597 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.612731 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-config-data\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.627922 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl4f9\" (UniqueName: \"kubernetes.io/projected/01246ada-c9e0-4fb7-9869-63750b7c955a-kube-api-access-rl4f9\") pod \"nova-api-0\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") " pod="openstack/nova-api-0" Feb 04 11:48:25 crc kubenswrapper[4728]: I0204 11:48:25.734114 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 11:48:26 crc kubenswrapper[4728]: I0204 11:48:26.297202 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 11:48:26 crc kubenswrapper[4728]: I0204 11:48:26.365601 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" Feb 04 11:48:26 crc kubenswrapper[4728]: I0204 11:48:26.366549 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ldgrq" event={"ID":"f06dc94f-182a-4b89-92c4-bb86ea7b4a0b","Type":"ContainerDied","Data":"8f7b371abd2da64ec40fcbe8b98cef76683cd130cd92b40766f1b1da31d1157e"} Feb 04 11:48:26 crc kubenswrapper[4728]: I0204 11:48:26.366585 4728 scope.go:117] "RemoveContainer" containerID="c3a7591d2eb9c34a5c88911faf670445074d147c974108fdbcc248d04dfc58d9" Feb 04 11:48:26 crc kubenswrapper[4728]: I0204 11:48:26.407036 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ldgrq"] Feb 04 11:48:26 crc kubenswrapper[4728]: I0204 11:48:26.429395 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ldgrq"] Feb 04 11:48:26 crc kubenswrapper[4728]: I0204 11:48:26.543612 4728 scope.go:117] "RemoveContainer" containerID="92bd4a3fdd5a184665059b892cfc5ac60136cac2728f3125e1bb419a4348e139" Feb 04 11:48:26 crc kubenswrapper[4728]: I0204 11:48:26.748076 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:48:26 crc kubenswrapper[4728]: I0204 11:48:26.870761 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:26 crc kubenswrapper[4728]: I0204 11:48:26.993114 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 11:48:26 crc kubenswrapper[4728]: W0204 11:48:26.995694 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01246ada_c9e0_4fb7_9869_63750b7c955a.slice/crio-82fbf21c926e6b025c8b2381f1fc61ece50a5ca6f63257efcea0609f342ebcdb WatchSource:0}: Error finding container 82fbf21c926e6b025c8b2381f1fc61ece50a5ca6f63257efcea0609f342ebcdb: Status 404 returned error can't find the container with id 82fbf21c926e6b025c8b2381f1fc61ece50a5ca6f63257efcea0609f342ebcdb Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.040150 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-scripts\") pod \"fed81957-c76f-4a31-837d-947294fe38a4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.040202 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhcrs\" (UniqueName: \"kubernetes.io/projected/fed81957-c76f-4a31-837d-947294fe38a4-kube-api-access-fhcrs\") pod \"fed81957-c76f-4a31-837d-947294fe38a4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.040267 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-config-data\") pod \"fed81957-c76f-4a31-837d-947294fe38a4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.040356 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-combined-ca-bundle\") pod \"fed81957-c76f-4a31-837d-947294fe38a4\" (UID: \"fed81957-c76f-4a31-837d-947294fe38a4\") " Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 
11:48:27.045196 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed81957-c76f-4a31-837d-947294fe38a4-kube-api-access-fhcrs" (OuterVolumeSpecName: "kube-api-access-fhcrs") pod "fed81957-c76f-4a31-837d-947294fe38a4" (UID: "fed81957-c76f-4a31-837d-947294fe38a4"). InnerVolumeSpecName "kube-api-access-fhcrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.047950 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-scripts" (OuterVolumeSpecName: "scripts") pod "fed81957-c76f-4a31-837d-947294fe38a4" (UID: "fed81957-c76f-4a31-837d-947294fe38a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.076534 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fed81957-c76f-4a31-837d-947294fe38a4" (UID: "fed81957-c76f-4a31-837d-947294fe38a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.097949 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-config-data" (OuterVolumeSpecName: "config-data") pod "fed81957-c76f-4a31-837d-947294fe38a4" (UID: "fed81957-c76f-4a31-837d-947294fe38a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.142980 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.143047 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhcrs\" (UniqueName: \"kubernetes.io/projected/fed81957-c76f-4a31-837d-947294fe38a4-kube-api-access-fhcrs\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.143062 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.143075 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed81957-c76f-4a31-837d-947294fe38a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.386268 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24ae7c06-fabb-496c-a457-524541d93aed","Type":"ContainerStarted","Data":"f60b4231eb43b9c4e674936315f47c61b79c17592b06731ecf1d772e92764cf0"} Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.387305 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24ae7c06-fabb-496c-a457-524541d93aed","Type":"ContainerStarted","Data":"4f385ec8f14d33ab30aa5f4edce7fb4acb0b1bb30e888c251fc0250eb61a775e"} Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.387401 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"24ae7c06-fabb-496c-a457-524541d93aed","Type":"ContainerStarted","Data":"2f4b970a86c58140ae65658be7b7cca99a086e37988417539dac9aa4b63cff23"} Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.390822 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01246ada-c9e0-4fb7-9869-63750b7c955a","Type":"ContainerStarted","Data":"3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d"} Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.390965 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01246ada-c9e0-4fb7-9869-63750b7c955a","Type":"ContainerStarted","Data":"e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d"} Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.391058 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01246ada-c9e0-4fb7-9869-63750b7c955a","Type":"ContainerStarted","Data":"82fbf21c926e6b025c8b2381f1fc61ece50a5ca6f63257efcea0609f342ebcdb"} Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.394062 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee","Type":"ContainerStarted","Data":"1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5"} Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.394397 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="ceilometer-central-agent" containerID="cri-o://1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6" gracePeriod=30 Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.394433 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="ceilometer-notification-agent" containerID="cri-o://81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec" gracePeriod=30 Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.394432 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="sg-core" containerID="cri-o://d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2" gracePeriod=30 Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.394411 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="proxy-httpd" containerID="cri-o://1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5" gracePeriod=30 Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.397417 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hssm4" event={"ID":"fed81957-c76f-4a31-837d-947294fe38a4","Type":"ContainerDied","Data":"a2807799026ffaeb9cc49f7b20b8b6bf7a1683a869f04c224c3a02671ce55d91"} Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.397537 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2807799026ffaeb9cc49f7b20b8b6bf7a1683a869f04c224c3a02671ce55d91" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.397685 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hssm4" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.433613 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.433587464 podStartE2EDuration="2.433587464s" podCreationTimestamp="2026-02-04 11:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:27.405909939 +0000 UTC m=+1256.548614324" watchObservedRunningTime="2026-02-04 11:48:27.433587464 +0000 UTC m=+1256.576291849" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.446983 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.721606914 podStartE2EDuration="10.446960739s" podCreationTimestamp="2026-02-04 11:48:17 +0000 UTC" firstStartedPulling="2026-02-04 11:48:20.644976793 +0000 UTC m=+1249.787681178" lastFinishedPulling="2026-02-04 11:48:26.370330618 +0000 UTC m=+1255.513035003" observedRunningTime="2026-02-04 11:48:27.432863026 +0000 UTC m=+1256.575567411" watchObservedRunningTime="2026-02-04 11:48:27.446960739 +0000 UTC m=+1256.589665124" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.462810 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.462776154 podStartE2EDuration="2.462776154s" podCreationTimestamp="2026-02-04 11:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:27.452777808 +0000 UTC m=+1256.595482313" watchObservedRunningTime="2026-02-04 11:48:27.462776154 +0000 UTC m=+1256.605480539" Feb 04 11:48:27 crc kubenswrapper[4728]: I0204 11:48:27.563605 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06dc94f-182a-4b89-92c4-bb86ea7b4a0b" path="/var/lib/kubelet/pods/f06dc94f-182a-4b89-92c4-bb86ea7b4a0b/volumes" Feb 04 11:48:28 crc kubenswrapper[4728]: I0204 11:48:28.409630 4728 generic.go:334] "Generic (PLEG): container finished" podID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerID="1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5" exitCode=0 Feb 04 11:48:28 crc kubenswrapper[4728]: I0204 11:48:28.409665 4728 generic.go:334] "Generic (PLEG): container finished" podID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerID="d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2" exitCode=2 Feb 04 11:48:28 crc kubenswrapper[4728]: I0204 11:48:28.409676 4728 generic.go:334] "Generic (PLEG): container finished" podID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerID="81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec" exitCode=0 Feb 04 11:48:28 crc kubenswrapper[4728]: I0204 11:48:28.410529 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee","Type":"ContainerDied","Data":"1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5"} Feb 04 11:48:28 crc kubenswrapper[4728]: I0204 11:48:28.410654 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee","Type":"ContainerDied","Data":"d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2"} Feb 04 11:48:28 crc kubenswrapper[4728]: I0204 11:48:28.410733 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee","Type":"ContainerDied","Data":"81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec"} Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.440974 4728 generic.go:334] "Generic (PLEG): container finished" podID="f9187093-4720-491c-b6a0-8a7bdafab687" containerID="7087b35f7c7b90f96088a2ba96a9769cf267eddd4cb3a1c1b2df04c3649b12b1" exitCode=0 Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.441116 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z8hwn" event={"ID":"f9187093-4720-491c-b6a0-8a7bdafab687","Type":"ContainerDied","Data":"7087b35f7c7b90f96088a2ba96a9769cf267eddd4cb3a1c1b2df04c3649b12b1"} Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.886423 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.895503 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-scripts\") pod \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.895604 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzc9c\" (UniqueName: \"kubernetes.io/projected/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-kube-api-access-xzc9c\") pod \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.895646 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-config-data\") pod \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.895674 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-run-httpd\") pod \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.895698 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-combined-ca-bundle\") pod \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.895817 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-sg-core-conf-yaml\") pod \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.895962 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-log-httpd\") pod \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\" (UID: \"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee\") " Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.896562 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" (UID: "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.896786 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" (UID: "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.902819 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-kube-api-access-xzc9c" (OuterVolumeSpecName: "kube-api-access-xzc9c") pod "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" (UID: "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee"). InnerVolumeSpecName "kube-api-access-xzc9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.902839 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-scripts" (OuterVolumeSpecName: "scripts") pod "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" (UID: "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.958111 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" (UID: "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.997884 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.997936 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzc9c\" (UniqueName: \"kubernetes.io/projected/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-kube-api-access-xzc9c\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.997958 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.997975 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:29 crc kubenswrapper[4728]: I0204 11:48:29.997991 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.015900 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" (UID: "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.022257 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-config-data" (OuterVolumeSpecName: "config-data") pod "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" (UID: "4d415ba3-d5c9-4757-a8c1-dddc5c8903ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.099541 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.099577 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.481698 4728 generic.go:334] "Generic (PLEG): container finished" podID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerID="1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6" exitCode=0 Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.481743 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee","Type":"ContainerDied","Data":"1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6"} Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.481836 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.483104 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d415ba3-d5c9-4757-a8c1-dddc5c8903ee","Type":"ContainerDied","Data":"8ff824792a978874f207fc336c0e50f93e093288266860248132b23d741194ed"} Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.483131 4728 scope.go:117] "RemoveContainer" containerID="1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.515779 4728 scope.go:117] "RemoveContainer" containerID="d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.601550 4728 scope.go:117] "RemoveContainer" containerID="81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.608092 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.642117 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.649068 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:30 crc kubenswrapper[4728]: E0204 11:48:30.649569 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="ceilometer-central-agent" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.649586 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="ceilometer-central-agent" Feb 04 11:48:30 crc kubenswrapper[4728]: E0204 11:48:30.649608 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="proxy-httpd" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.649617 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="proxy-httpd" Feb 04 11:48:30 crc kubenswrapper[4728]: E0204 11:48:30.649630 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fed81957-c76f-4a31-837d-947294fe38a4" containerName="nova-manage" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.649639 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed81957-c76f-4a31-837d-947294fe38a4" containerName="nova-manage" Feb 04 11:48:30 crc kubenswrapper[4728]: E0204 11:48:30.649655 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="sg-core" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.649671 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="sg-core" Feb 04 11:48:30 crc kubenswrapper[4728]: E0204 11:48:30.649706 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="ceilometer-notification-agent" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.649714 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="ceilometer-notification-agent" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.650808 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="ceilometer-notification-agent" Feb 04 
11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.650835 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="sg-core" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.650851 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="fed81957-c76f-4a31-837d-947294fe38a4" containerName="nova-manage" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.650868 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="ceilometer-central-agent" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.650889 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" containerName="proxy-httpd" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.653105 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.658413 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.658468 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.658808 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.689262 4728 scope.go:117] "RemoveContainer" containerID="1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.722471 4728 scope.go:117] "RemoveContainer" containerID="1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5" Feb 04 11:48:30 crc kubenswrapper[4728]: E0204 11:48:30.722812 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5\": container with ID starting with 1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5 not found: ID does not exist" containerID="1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.722842 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5"} err="failed to get container status \"1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5\": rpc error: code = NotFound desc = could not find container \"1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5\": container with ID starting with 1f4a8dc90b5f40f7db8d49de2c738e02a2ead0598bf1681ec7d42d1e60e37ed5 not found: ID does not exist" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.722862 4728 scope.go:117] "RemoveContainer" containerID="d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2" Feb 04 11:48:30 crc kubenswrapper[4728]: E0204 11:48:30.723065 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2\": container with ID starting with d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2 not found: ID does not exist" containerID="d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 
11:48:30.723086 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2"} err="failed to get container status \"d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2\": rpc error: code = NotFound desc = could not find container \"d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2\": container with ID starting with d7fc8b9fb27981622fc3e4f694de4b5353237f26f239308a1c0a612bda495ff2 not found: ID does not exist" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.723098 4728 scope.go:117] "RemoveContainer" containerID="81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec" Feb 04 11:48:30 crc kubenswrapper[4728]: E0204 11:48:30.723270 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec\": container with ID starting with 81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec not found: ID does not exist" containerID="81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.723284 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec"} err="failed to get container status \"81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec\": rpc error: code = NotFound desc = could not find container \"81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec\": container with ID starting with 81870b3b047b207f1e658a4b98acc94cac25ac05fb41d3d2723078f6228a85ec not found: ID does not exist" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.723294 4728 scope.go:117] "RemoveContainer" containerID="1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6" Feb 04 11:48:30 crc kubenswrapper[4728]: E0204 11:48:30.723465 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6\": container with ID starting with 1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6 not found: ID does not exist" containerID="1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.723480 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6"} err="failed to get container status \"1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6\": rpc error: code = NotFound desc = could not find container \"1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6\": container with ID starting with 1e84a3b536fc11a10137e6d878f26020f11f7f97c777c982995388df83d9c2a6 not found: ID does not exist" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.734814 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.735531 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.811305 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.811685 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnd28\" (UniqueName: \"kubernetes.io/projected/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-kube-api-access-jnd28\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.811819 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-log-httpd\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.812021 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.812105 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-config-data\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.812217 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-run-httpd\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.812305 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-scripts\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0" Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.909955 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z8hwn"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.913853 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-config-data\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.913925 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-run-httpd\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.914004 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-scripts\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.914113 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.914361 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnd28\" (UniqueName: \"kubernetes.io/projected/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-kube-api-access-jnd28\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.914389 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-log-httpd\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.914435 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.917104 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-run-httpd\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.917568 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-log-httpd\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.920833 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.921337 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-scripts\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.928977 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-config-data\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.932234 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.949195 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnd28\" (UniqueName: \"kubernetes.io/projected/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-kube-api-access-jnd28\") pod \"ceilometer-0\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " pod="openstack/ceilometer-0"
Feb 04 11:48:30 crc kubenswrapper[4728]: I0204 11:48:30.975305 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.015681 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-scripts\") pod \"f9187093-4720-491c-b6a0-8a7bdafab687\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") "
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.015769 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-config-data\") pod \"f9187093-4720-491c-b6a0-8a7bdafab687\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") "
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.015884 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-combined-ca-bundle\") pod \"f9187093-4720-491c-b6a0-8a7bdafab687\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") "
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.015974 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7mh4\" (UniqueName: \"kubernetes.io/projected/f9187093-4720-491c-b6a0-8a7bdafab687-kube-api-access-q7mh4\") pod \"f9187093-4720-491c-b6a0-8a7bdafab687\" (UID: \"f9187093-4720-491c-b6a0-8a7bdafab687\") "
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.019492 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-scripts" (OuterVolumeSpecName: "scripts") pod "f9187093-4720-491c-b6a0-8a7bdafab687" (UID: "f9187093-4720-491c-b6a0-8a7bdafab687"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.020199 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9187093-4720-491c-b6a0-8a7bdafab687-kube-api-access-q7mh4" (OuterVolumeSpecName: "kube-api-access-q7mh4") pod "f9187093-4720-491c-b6a0-8a7bdafab687" (UID: "f9187093-4720-491c-b6a0-8a7bdafab687"). InnerVolumeSpecName "kube-api-access-q7mh4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.044656 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9187093-4720-491c-b6a0-8a7bdafab687" (UID: "f9187093-4720-491c-b6a0-8a7bdafab687"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.054484 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-config-data" (OuterVolumeSpecName: "config-data") pod "f9187093-4720-491c-b6a0-8a7bdafab687" (UID: "f9187093-4720-491c-b6a0-8a7bdafab687"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.118780 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.119124 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-config-data\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.119140 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9187093-4720-491c-b6a0-8a7bdafab687-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.119153 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7mh4\" (UniqueName: \"kubernetes.io/projected/f9187093-4720-491c-b6a0-8a7bdafab687-kube-api-access-q7mh4\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.399663 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.503993 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z8hwn" event={"ID":"f9187093-4720-491c-b6a0-8a7bdafab687","Type":"ContainerDied","Data":"e1eef790bc52d1d0179001fa919e39d842a81cf4e831d0059d652d12916e55e7"}
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.504047 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1eef790bc52d1d0179001fa919e39d842a81cf4e831d0059d652d12916e55e7"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.504010 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z8hwn"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.506884 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b3dbbb6-a809-492a-b6c4-42741f7d1b43","Type":"ContainerStarted","Data":"3dce84795bb4facd8330177ba5cf97ec968bfb955657b79e677eabe6a623b6a4"}
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.598943 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d415ba3-d5c9-4757-a8c1-dddc5c8903ee" path="/var/lib/kubelet/pods/4d415ba3-d5c9-4757-a8c1-dddc5c8903ee/volumes"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.600500 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 04 11:48:31 crc kubenswrapper[4728]: E0204 11:48:31.600918 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9187093-4720-491c-b6a0-8a7bdafab687" containerName="nova-cell1-conductor-db-sync"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.600938 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9187093-4720-491c-b6a0-8a7bdafab687" containerName="nova-cell1-conductor-db-sync"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.601427 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9187093-4720-491c-b6a0-8a7bdafab687" containerName="nova-cell1-conductor-db-sync"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.602389 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.602539 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.604813 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.741397 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a095a869-5d4b-4061-b13e-3d3f7f4c27ba-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a095a869-5d4b-4061-b13e-3d3f7f4c27ba\") " pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.741553 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a095a869-5d4b-4061-b13e-3d3f7f4c27ba-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a095a869-5d4b-4061-b13e-3d3f7f4c27ba\") " pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.741917 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7kdp\" (UniqueName: \"kubernetes.io/projected/a095a869-5d4b-4061-b13e-3d3f7f4c27ba-kube-api-access-z7kdp\") pod \"nova-cell1-conductor-0\" (UID: \"a095a869-5d4b-4061-b13e-3d3f7f4c27ba\") " pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.843887 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7kdp\" (UniqueName: \"kubernetes.io/projected/a095a869-5d4b-4061-b13e-3d3f7f4c27ba-kube-api-access-z7kdp\") pod \"nova-cell1-conductor-0\" (UID: \"a095a869-5d4b-4061-b13e-3d3f7f4c27ba\") " pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.844015 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a095a869-5d4b-4061-b13e-3d3f7f4c27ba-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a095a869-5d4b-4061-b13e-3d3f7f4c27ba\") " pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.844054 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a095a869-5d4b-4061-b13e-3d3f7f4c27ba-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a095a869-5d4b-4061-b13e-3d3f7f4c27ba\") " pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.847737 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a095a869-5d4b-4061-b13e-3d3f7f4c27ba-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a095a869-5d4b-4061-b13e-3d3f7f4c27ba\") " pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.847924 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a095a869-5d4b-4061-b13e-3d3f7f4c27ba-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a095a869-5d4b-4061-b13e-3d3f7f4c27ba\") " pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.863530 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7kdp\" (UniqueName: \"kubernetes.io/projected/a095a869-5d4b-4061-b13e-3d3f7f4c27ba-kube-api-access-z7kdp\") pod \"nova-cell1-conductor-0\" (UID: \"a095a869-5d4b-4061-b13e-3d3f7f4c27ba\") " pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:31 crc kubenswrapper[4728]: I0204 11:48:31.925498 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:32 crc kubenswrapper[4728]: I0204 11:48:32.388328 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 04 11:48:32 crc kubenswrapper[4728]: W0204 11:48:32.389618 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda095a869_5d4b_4061_b13e_3d3f7f4c27ba.slice/crio-c688c177aef46e5efcfe5d6cb3405b2443831631779b19aa1b3bdc0b9dff8141 WatchSource:0}: Error finding container c688c177aef46e5efcfe5d6cb3405b2443831631779b19aa1b3bdc0b9dff8141: Status 404 returned error can't find the container with id c688c177aef46e5efcfe5d6cb3405b2443831631779b19aa1b3bdc0b9dff8141
Feb 04 11:48:32 crc kubenswrapper[4728]: I0204 11:48:32.515267 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 04 11:48:32 crc kubenswrapper[4728]: I0204 11:48:32.519429 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a095a869-5d4b-4061-b13e-3d3f7f4c27ba","Type":"ContainerStarted","Data":"c688c177aef46e5efcfe5d6cb3405b2443831631779b19aa1b3bdc0b9dff8141"}
Feb 04 11:48:32 crc kubenswrapper[4728]: I0204 11:48:32.520961 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b3dbbb6-a809-492a-b6c4-42741f7d1b43","Type":"ContainerStarted","Data":"977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953"}
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.213414 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.213974 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="01246ada-c9e0-4fb7-9869-63750b7c955a" containerName="nova-api-log" containerID="cri-o://e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d" gracePeriod=30
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.214106 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="01246ada-c9e0-4fb7-9869-63750b7c955a" containerName="nova-api-api" containerID="cri-o://3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d" gracePeriod=30
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.301546 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.302201 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="24ae7c06-fabb-496c-a457-524541d93aed" containerName="nova-metadata-log" containerID="cri-o://4f385ec8f14d33ab30aa5f4edce7fb4acb0b1bb30e888c251fc0250eb61a775e" gracePeriod=30
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.302239 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="24ae7c06-fabb-496c-a457-524541d93aed" containerName="nova-metadata-metadata" containerID="cri-o://f60b4231eb43b9c4e674936315f47c61b79c17592b06731ecf1d772e92764cf0" gracePeriod=30
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.537465 4728 generic.go:334] "Generic (PLEG): container finished" podID="01246ada-c9e0-4fb7-9869-63750b7c955a" containerID="e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d" exitCode=143
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.537555 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01246ada-c9e0-4fb7-9869-63750b7c955a","Type":"ContainerDied","Data":"e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d"}
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.544124 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b3dbbb6-a809-492a-b6c4-42741f7d1b43","Type":"ContainerStarted","Data":"63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7"}
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.544170 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b3dbbb6-a809-492a-b6c4-42741f7d1b43","Type":"ContainerStarted","Data":"ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b"}
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.547607 4728 generic.go:334] "Generic (PLEG): container finished" podID="24ae7c06-fabb-496c-a457-524541d93aed" containerID="f60b4231eb43b9c4e674936315f47c61b79c17592b06731ecf1d772e92764cf0" exitCode=0
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.547631 4728 generic.go:334] "Generic (PLEG): container finished" podID="24ae7c06-fabb-496c-a457-524541d93aed" containerID="4f385ec8f14d33ab30aa5f4edce7fb4acb0b1bb30e888c251fc0250eb61a775e" exitCode=143
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.547665 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24ae7c06-fabb-496c-a457-524541d93aed","Type":"ContainerDied","Data":"f60b4231eb43b9c4e674936315f47c61b79c17592b06731ecf1d772e92764cf0"}
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.547684 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24ae7c06-fabb-496c-a457-524541d93aed","Type":"ContainerDied","Data":"4f385ec8f14d33ab30aa5f4edce7fb4acb0b1bb30e888c251fc0250eb61a775e"}
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.551446 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a095a869-5d4b-4061-b13e-3d3f7f4c27ba","Type":"ContainerStarted","Data":"0329c98593e699682ba270d59d849847fe9a50355f98987a103cfb2a898be843"}
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.563514 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:33 crc kubenswrapper[4728]: I0204 11:48:33.579338 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5793099059999998 podStartE2EDuration="2.579309906s" podCreationTimestamp="2026-02-04 11:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:33.574027821 +0000 UTC m=+1262.716732216" watchObservedRunningTime="2026-02-04 11:48:33.579309906 +0000 UTC m=+1262.722014291"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.018345 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.198563 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
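A note on the exit codes above: container runtimes report death-by-signal as 128 plus the signal number, so the exitCode=143 entries are containers that honored the SIGTERM (15) sent at the start of their gracePeriod=30 window, exitCode=0 is a clean shutdown, and the exitCode=137 entries later in this log (at 11:48:52) are 128+9, i.e. SIGKILL. A tiny sketch of the decoding, using the three codes that occur here:

    package main

    import "fmt"

    // Runtimes encode a signal death as 128 + signal number: a container
    // that honors SIGTERM within its grace period exits 143, one that has
    // to be SIGKILLed exits 137, and 0 is a clean shutdown.
    func describeExit(code int) string {
        switch {
        case code == 0:
            return "clean exit"
        case code > 128:
            return fmt.Sprintf("killed by signal %d", code-128)
        default:
            return fmt.Sprintf("error exit %d", code)
        }
    }

    func main() {
        for _, c := range []int{0, 143, 137} { // the three codes in this log
            fmt.Printf("exitCode=%d: %s\n", c, describeExit(c))
        }
    }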
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.198935 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-combined-ca-bundle\") pod \"24ae7c06-fabb-496c-a457-524541d93aed\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") "
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.198980 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sbzz\" (UniqueName: \"kubernetes.io/projected/24ae7c06-fabb-496c-a457-524541d93aed-kube-api-access-9sbzz\") pod \"24ae7c06-fabb-496c-a457-524541d93aed\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") "
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.199137 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-config-data\") pod \"24ae7c06-fabb-496c-a457-524541d93aed\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") "
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.199226 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-nova-metadata-tls-certs\") pod \"24ae7c06-fabb-496c-a457-524541d93aed\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") "
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.199245 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24ae7c06-fabb-496c-a457-524541d93aed-logs\") pod \"24ae7c06-fabb-496c-a457-524541d93aed\" (UID: \"24ae7c06-fabb-496c-a457-524541d93aed\") "
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.199968 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ae7c06-fabb-496c-a457-524541d93aed-logs" (OuterVolumeSpecName: "logs") pod "24ae7c06-fabb-496c-a457-524541d93aed" (UID: "24ae7c06-fabb-496c-a457-524541d93aed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.206063 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ae7c06-fabb-496c-a457-524541d93aed-kube-api-access-9sbzz" (OuterVolumeSpecName: "kube-api-access-9sbzz") pod "24ae7c06-fabb-496c-a457-524541d93aed" (UID: "24ae7c06-fabb-496c-a457-524541d93aed"). InnerVolumeSpecName "kube-api-access-9sbzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.235739 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-config-data" (OuterVolumeSpecName: "config-data") pod "24ae7c06-fabb-496c-a457-524541d93aed" (UID: "24ae7c06-fabb-496c-a457-524541d93aed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.253457 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24ae7c06-fabb-496c-a457-524541d93aed" (UID: "24ae7c06-fabb-496c-a457-524541d93aed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.266417 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "24ae7c06-fabb-496c-a457-524541d93aed" (UID: "24ae7c06-fabb-496c-a457-524541d93aed"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.300567 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01246ada-c9e0-4fb7-9869-63750b7c955a-logs\") pod \"01246ada-c9e0-4fb7-9869-63750b7c955a\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") "
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.300928 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-combined-ca-bundle\") pod \"01246ada-c9e0-4fb7-9869-63750b7c955a\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") "
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.301008 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01246ada-c9e0-4fb7-9869-63750b7c955a-logs" (OuterVolumeSpecName: "logs") pod "01246ada-c9e0-4fb7-9869-63750b7c955a" (UID: "01246ada-c9e0-4fb7-9869-63750b7c955a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.301153 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-config-data\") pod \"01246ada-c9e0-4fb7-9869-63750b7c955a\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") "
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.301335 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl4f9\" (UniqueName: \"kubernetes.io/projected/01246ada-c9e0-4fb7-9869-63750b7c955a-kube-api-access-rl4f9\") pod \"01246ada-c9e0-4fb7-9869-63750b7c955a\" (UID: \"01246ada-c9e0-4fb7-9869-63750b7c955a\") "
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.302031 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-config-data\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.302139 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24ae7c06-fabb-496c-a457-524541d93aed-logs\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.302221 4728 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.302300 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ae7c06-fabb-496c-a457-524541d93aed-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.302380 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01246ada-c9e0-4fb7-9869-63750b7c955a-logs\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.302465 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sbzz\" (UniqueName: \"kubernetes.io/projected/24ae7c06-fabb-496c-a457-524541d93aed-kube-api-access-9sbzz\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.304941 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01246ada-c9e0-4fb7-9869-63750b7c955a-kube-api-access-rl4f9" (OuterVolumeSpecName: "kube-api-access-rl4f9") pod "01246ada-c9e0-4fb7-9869-63750b7c955a" (UID: "01246ada-c9e0-4fb7-9869-63750b7c955a"). InnerVolumeSpecName "kube-api-access-rl4f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.327086 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01246ada-c9e0-4fb7-9869-63750b7c955a" (UID: "01246ada-c9e0-4fb7-9869-63750b7c955a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.330189 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-config-data" (OuterVolumeSpecName: "config-data") pod "01246ada-c9e0-4fb7-9869-63750b7c955a" (UID: "01246ada-c9e0-4fb7-9869-63750b7c955a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.404416 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl4f9\" (UniqueName: \"kubernetes.io/projected/01246ada-c9e0-4fb7-9869-63750b7c955a-kube-api-access-rl4f9\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.404635 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.404735 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01246ada-c9e0-4fb7-9869-63750b7c955a-config-data\") on node \"crc\" DevicePath \"\""
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.564064 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.564545 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24ae7c06-fabb-496c-a457-524541d93aed","Type":"ContainerDied","Data":"2f4b970a86c58140ae65658be7b7cca99a086e37988417539dac9aa4b63cff23"}
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.564597 4728 scope.go:117] "RemoveContainer" containerID="f60b4231eb43b9c4e674936315f47c61b79c17592b06731ecf1d772e92764cf0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.571941 4728 generic.go:334] "Generic (PLEG): container finished" podID="01246ada-c9e0-4fb7-9869-63750b7c955a" containerID="3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d" exitCode=0
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.572248 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.572926 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01246ada-c9e0-4fb7-9869-63750b7c955a","Type":"ContainerDied","Data":"3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d"}
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.573072 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01246ada-c9e0-4fb7-9869-63750b7c955a","Type":"ContainerDied","Data":"82fbf21c926e6b025c8b2381f1fc61ece50a5ca6f63257efcea0609f342ebcdb"}
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.610096 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.636911 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.641989 4728 scope.go:117] "RemoveContainer" containerID="4f385ec8f14d33ab30aa5f4edce7fb4acb0b1bb30e888c251fc0250eb61a775e"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.650556 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:48:34 crc kubenswrapper[4728]: E0204 11:48:34.651066 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ae7c06-fabb-496c-a457-524541d93aed" containerName="nova-metadata-log"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.651089 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ae7c06-fabb-496c-a457-524541d93aed" containerName="nova-metadata-log"
Feb 04 11:48:34 crc kubenswrapper[4728]: E0204 11:48:34.651107 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01246ada-c9e0-4fb7-9869-63750b7c955a" containerName="nova-api-api"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.651114 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="01246ada-c9e0-4fb7-9869-63750b7c955a" containerName="nova-api-api"
Feb 04 11:48:34 crc kubenswrapper[4728]: E0204 11:48:34.651151 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ae7c06-fabb-496c-a457-524541d93aed" containerName="nova-metadata-metadata"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.651160 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ae7c06-fabb-496c-a457-524541d93aed" containerName="nova-metadata-metadata"
Feb 04 11:48:34 crc kubenswrapper[4728]: E0204 11:48:34.651178 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01246ada-c9e0-4fb7-9869-63750b7c955a" containerName="nova-api-log"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.651185 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="01246ada-c9e0-4fb7-9869-63750b7c955a" containerName="nova-api-log"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.651393 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="01246ada-c9e0-4fb7-9869-63750b7c955a" containerName="nova-api-log"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.651419 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ae7c06-fabb-496c-a457-524541d93aed" containerName="nova-metadata-log"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.651433 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ae7c06-fabb-496c-a457-524541d93aed" containerName="nova-metadata-metadata"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.651448 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="01246ada-c9e0-4fb7-9869-63750b7c955a" containerName="nova-api-api"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.652673 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.663884 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.670146 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.682566 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.695873 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.708625 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.711419 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.717232 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.717290 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.721834 4728 scope.go:117] "RemoveContainer" containerID="3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.752692 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.791622 4728 scope.go:117] "RemoveContainer" containerID="e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.819952 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.820304 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-config-data\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.820362 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee5b119d-5791-4812-9ac7-f86ed1d734c8-logs\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.820379 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f225cf5-e1d8-4c10-893b-a497f9959caf-logs\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.820409 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.820434 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-config-data\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.820462 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.820488 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjsw\" (UniqueName: \"kubernetes.io/projected/ee5b119d-5791-4812-9ac7-f86ed1d734c8-kube-api-access-4qjsw\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.820549 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bngdz\" (UniqueName: \"kubernetes.io/projected/3f225cf5-e1d8-4c10-893b-a497f9959caf-kube-api-access-bngdz\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.836309 4728 scope.go:117] "RemoveContainer" containerID="3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d"
Feb 04 11:48:34 crc kubenswrapper[4728]: E0204 11:48:34.837129 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d\": container with ID starting with 3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d not found: ID does not exist" containerID="3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.837217 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d"} err="failed to get container status \"3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d\": rpc error: code = NotFound desc = could not find container \"3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d\": container with ID starting with 3b85296019e1e9e94dff5c74acf17123abd082fd0f5541873e4799ca09bcf32d not found: ID does not exist"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.837307 4728 scope.go:117] "RemoveContainer" containerID="e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d"
Feb 04 11:48:34 crc kubenswrapper[4728]: E0204 11:48:34.839433 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d\": container with ID starting with e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d not found: ID does not exist" containerID="e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.839537 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d"} err="failed to get container status \"e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d\": rpc error: code = NotFound desc = could not find container \"e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d\": container with ID starting with e347bd50f33cf47076a9c8e64889cf1b08d10b757e14113df860c855e46c664d not found: ID does not exist"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.922229 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-config-data\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.922673 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.922804 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qjsw\" (UniqueName: \"kubernetes.io/projected/ee5b119d-5791-4812-9ac7-f86ed1d734c8-kube-api-access-4qjsw\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.922945 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bngdz\" (UniqueName: \"kubernetes.io/projected/3f225cf5-e1d8-4c10-893b-a497f9959caf-kube-api-access-bngdz\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.923403 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.923902 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-config-data\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.924059 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee5b119d-5791-4812-9ac7-f86ed1d734c8-logs\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.924157 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f225cf5-e1d8-4c10-893b-a497f9959caf-logs\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.924265 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.924484 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee5b119d-5791-4812-9ac7-f86ed1d734c8-logs\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.924849 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f225cf5-e1d8-4c10-893b-a497f9959caf-logs\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.927513 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.927807 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.928336 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-config-data\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.935574 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.937987 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-config-data\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.945854 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qjsw\" (UniqueName: \"kubernetes.io/projected/ee5b119d-5791-4812-9ac7-f86ed1d734c8-kube-api-access-4qjsw\") pod \"nova-api-0\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") " pod="openstack/nova-api-0"
Feb 04 11:48:34 crc kubenswrapper[4728]: I0204 11:48:34.956614 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bngdz\" (UniqueName: \"kubernetes.io/projected/3f225cf5-e1d8-4c10-893b-a497f9959caf-kube-api-access-bngdz\") pod \"nova-metadata-0\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " pod="openstack/nova-metadata-0"
Feb 04 11:48:35 crc kubenswrapper[4728]: I0204 11:48:35.022601 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 04 11:48:35 crc kubenswrapper[4728]: I0204 11:48:35.035219 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 04 11:48:35 crc kubenswrapper[4728]: I0204 11:48:35.544519 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 11:48:35 crc kubenswrapper[4728]: I0204 11:48:35.567998 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01246ada-c9e0-4fb7-9869-63750b7c955a" path="/var/lib/kubelet/pods/01246ada-c9e0-4fb7-9869-63750b7c955a/volumes"
Feb 04 11:48:35 crc kubenswrapper[4728]: I0204 11:48:35.568894 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ae7c06-fabb-496c-a457-524541d93aed" path="/var/lib/kubelet/pods/24ae7c06-fabb-496c-a457-524541d93aed/volumes"
Feb 04 11:48:35 crc kubenswrapper[4728]: I0204 11:48:35.587235 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f225cf5-e1d8-4c10-893b-a497f9959caf","Type":"ContainerStarted","Data":"cd9d9eacc9d17980711c3f3b0f6924fc750c67e56b614730faf9fb811c6cde9b"}
Feb 04 11:48:35 crc kubenswrapper[4728]: I0204 11:48:35.639323 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:48:36 crc kubenswrapper[4728]: I0204 11:48:36.616344 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f225cf5-e1d8-4c10-893b-a497f9959caf","Type":"ContainerStarted","Data":"9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad"}
Feb 04 11:48:36 crc kubenswrapper[4728]: I0204 11:48:36.617181 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f225cf5-e1d8-4c10-893b-a497f9959caf","Type":"ContainerStarted","Data":"1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36"}
Feb 04 11:48:36 crc kubenswrapper[4728]: I0204 11:48:36.620110 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee5b119d-5791-4812-9ac7-f86ed1d734c8","Type":"ContainerStarted","Data":"bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6"}
Feb 04 11:48:36 crc kubenswrapper[4728]: I0204 11:48:36.620154 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee5b119d-5791-4812-9ac7-f86ed1d734c8","Type":"ContainerStarted","Data":"fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2"}
Feb 04 11:48:36 crc kubenswrapper[4728]: I0204 11:48:36.620163 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee5b119d-5791-4812-9ac7-f86ed1d734c8","Type":"ContainerStarted","Data":"1a7974f3a014572cf79125426cef70584fc8f2e12a59878f40822977fdde0331"}
Feb 04 11:48:36 crc kubenswrapper[4728]: I0204 11:48:36.636504 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b3dbbb6-a809-492a-b6c4-42741f7d1b43","Type":"ContainerStarted","Data":"1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd"}
Feb 04 11:48:36 crc kubenswrapper[4728]: I0204 11:48:36.637006 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 04 11:48:36 crc kubenswrapper[4728]: I0204 11:48:36.654683 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.654659032 podStartE2EDuration="2.654659032s" podCreationTimestamp="2026-02-04 11:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:36.635630281 +0000 UTC m=+1265.778334666" watchObservedRunningTime="2026-02-04 11:48:36.654659032 +0000 UTC m=+1265.797363427"
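In the pod_startup_latency_tracker entries, podStartSLOduration appears to be the end-to-end startup duration minus time spent pulling images, computed from the monotonic-clock offsets (the m=+... values): pods that pulled nothing (zero-value pull timestamps, as in the entry above) report an SLO duration equal to the E2E duration, while for the ceilometer-0 entry a few lines below, 1265.249406532 - 1260.553288409 = 4.696118123s of pulling and 6.681083616 - 4.696118123 = 1.984965493s, which matches the logged value exactly. A worked check of that arithmetic:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (m=+...) from the ceilometer-0 entry below.
        firstStartedPulling := 1260.553288409
        lastFinishedPulling := 1265.249406532
        podStartE2E := 6.681083616

        pulling := lastFinishedPulling - firstStartedPulling
        fmt.Printf("image pulling:       %.9fs\n", pulling)             // 4.696118123s
        fmt.Printf("podStartSLOduration: %.9fs\n", podStartE2E-pulling) // 1.984965493s
    }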
Feb 04 11:48:36 crc kubenswrapper[4728]: I0204 11:48:36.666541 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.666516341 podStartE2EDuration="2.666516341s" podCreationTimestamp="2026-02-04 11:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:36.657959179 +0000 UTC m=+1265.800663564" watchObservedRunningTime="2026-02-04 11:48:36.666516341 +0000 UTC m=+1265.809220726"
Feb 04 11:48:36 crc kubenswrapper[4728]: I0204 11:48:36.681110 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.984965493 podStartE2EDuration="6.681083616s" podCreationTimestamp="2026-02-04 11:48:30 +0000 UTC" firstStartedPulling="2026-02-04 11:48:31.410583984 +0000 UTC m=+1260.553288409" lastFinishedPulling="2026-02-04 11:48:36.106702127 +0000 UTC m=+1265.249406532" observedRunningTime="2026-02-04 11:48:36.678638308 +0000 UTC m=+1265.821342723" watchObservedRunningTime="2026-02-04 11:48:36.681083616 +0000 UTC m=+1265.823788001"
Feb 04 11:48:40 crc kubenswrapper[4728]: I0204 11:48:40.036047 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 04 11:48:40 crc kubenswrapper[4728]: I0204 11:48:40.036325 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 04 11:48:41 crc kubenswrapper[4728]: I0204 11:48:41.951175 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 04 11:48:45 crc kubenswrapper[4728]: I0204 11:48:45.023211 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 04 11:48:45 crc kubenswrapper[4728]: I0204 11:48:45.023516 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 04 11:48:45 crc kubenswrapper[4728]: I0204 11:48:45.035701 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 04 11:48:45 crc kubenswrapper[4728]: I0204 11:48:45.035743 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 04 11:48:46 crc kubenswrapper[4728]: I0204 11:48:46.064104 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 04 11:48:46 crc kubenswrapper[4728]: I0204 11:48:46.118004 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 04 11:48:46 crc kubenswrapper[4728]: I0204 11:48:46.118060 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 04 11:48:46 crc kubenswrapper[4728]: I0204 11:48:46.118171 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
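The probe output above, "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", is the standard net/http client-timeout error: the HTTP prober gives each attempt a bounded time budget, and a server that accepts the connection but is slow to send response headers (here, the freshly restarted nova-api and nova-metadata services still initializing) fails the startup probe with exactly this message. A minimal reproduction with a deliberately slow local handler; the 2s/500ms timings are made up for the demo:

    package main

    import (
        "fmt"
        "net/http"
        "net/http/httptest"
        "time"
    )

    func main() {
        // A handler that accepts the request but stalls before writing headers,
        // like an API process that is up but not yet ready to serve.
        srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            time.Sleep(2 * time.Second)
        }))
        defer srv.Close()

        client := &http.Client{Timeout: 500 * time.Millisecond}
        _, err := client.Get(srv.URL)
        fmt.Println(err) // ...: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    }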
Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.629376 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.635362 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.781581 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfv8g\" (UniqueName: \"kubernetes.io/projected/579a7c27-d276-4066-b865-2f621e74410d-kube-api-access-kfv8g\") pod \"579a7c27-d276-4066-b865-2f621e74410d\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") "
Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.781829 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-config-data\") pod \"5211fb35-aec6-47a3-b309-ec02052d52c0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") "
Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.781886 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-combined-ca-bundle\") pod \"579a7c27-d276-4066-b865-2f621e74410d\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") "
Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.782033 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhkvh\" (UniqueName: \"kubernetes.io/projected/5211fb35-aec6-47a3-b309-ec02052d52c0-kube-api-access-xhkvh\") pod \"5211fb35-aec6-47a3-b309-ec02052d52c0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") "
Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.782204 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-config-data\") pod \"579a7c27-d276-4066-b865-2f621e74410d\" (UID: \"579a7c27-d276-4066-b865-2f621e74410d\") "
Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.782282 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-combined-ca-bundle\") pod \"5211fb35-aec6-47a3-b309-ec02052d52c0\" (UID: \"5211fb35-aec6-47a3-b309-ec02052d52c0\") "
Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.785770 4728 generic.go:334] "Generic (PLEG): container finished" podID="579a7c27-d276-4066-b865-2f621e74410d" containerID="9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923" exitCode=137
Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.785870 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"579a7c27-d276-4066-b865-2f621e74410d","Type":"ContainerDied","Data":"9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923"}
Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.785953 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.786083 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"579a7c27-d276-4066-b865-2f621e74410d","Type":"ContainerDied","Data":"071f6e2416213c4b6980a3bdd13eafe8272be5a8ca5b8cabb541a7d2d661fefa"} Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.786090 4728 scope.go:117] "RemoveContainer" containerID="9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.789053 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5211fb35-aec6-47a3-b309-ec02052d52c0-kube-api-access-xhkvh" (OuterVolumeSpecName: "kube-api-access-xhkvh") pod "5211fb35-aec6-47a3-b309-ec02052d52c0" (UID: "5211fb35-aec6-47a3-b309-ec02052d52c0"). InnerVolumeSpecName "kube-api-access-xhkvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.789388 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579a7c27-d276-4066-b865-2f621e74410d-kube-api-access-kfv8g" (OuterVolumeSpecName: "kube-api-access-kfv8g") pod "579a7c27-d276-4066-b865-2f621e74410d" (UID: "579a7c27-d276-4066-b865-2f621e74410d"). InnerVolumeSpecName "kube-api-access-kfv8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.789653 4728 generic.go:334] "Generic (PLEG): container finished" podID="5211fb35-aec6-47a3-b309-ec02052d52c0" containerID="b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2" exitCode=137 Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.789688 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5211fb35-aec6-47a3-b309-ec02052d52c0","Type":"ContainerDied","Data":"b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2"} Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.789712 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5211fb35-aec6-47a3-b309-ec02052d52c0","Type":"ContainerDied","Data":"6de6408cabbf658fec25b341fb4f97bbcab06cd157bd328ecb0581922c7f1352"} Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.789793 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.810206 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-config-data" (OuterVolumeSpecName: "config-data") pod "5211fb35-aec6-47a3-b309-ec02052d52c0" (UID: "5211fb35-aec6-47a3-b309-ec02052d52c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.814695 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "579a7c27-d276-4066-b865-2f621e74410d" (UID: "579a7c27-d276-4066-b865-2f621e74410d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.817351 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-config-data" (OuterVolumeSpecName: "config-data") pod "579a7c27-d276-4066-b865-2f621e74410d" (UID: "579a7c27-d276-4066-b865-2f621e74410d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.822043 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5211fb35-aec6-47a3-b309-ec02052d52c0" (UID: "5211fb35-aec6-47a3-b309-ec02052d52c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.859222 4728 scope.go:117] "RemoveContainer" containerID="9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923" Feb 04 11:48:52 crc kubenswrapper[4728]: E0204 11:48:52.859595 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923\": container with ID starting with 9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923 not found: ID does not exist" containerID="9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.859636 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923"} err="failed to get container status \"9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923\": rpc error: code = NotFound desc = could not find container \"9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923\": container with ID starting with 9424355f8c55c340d5b1f69aa80065f36b329b82dd82fcf3f10ec96e8b3e2923 not found: ID does not exist" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.859657 4728 scope.go:117] "RemoveContainer" containerID="b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.884857 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.884904 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.884919 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfv8g\" (UniqueName: \"kubernetes.io/projected/579a7c27-d276-4066-b865-2f621e74410d-kube-api-access-kfv8g\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.884930 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5211fb35-aec6-47a3-b309-ec02052d52c0-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.884941 4728 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579a7c27-d276-4066-b865-2f621e74410d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.884951 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhkvh\" (UniqueName: \"kubernetes.io/projected/5211fb35-aec6-47a3-b309-ec02052d52c0-kube-api-access-xhkvh\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.888084 4728 scope.go:117] "RemoveContainer" containerID="b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2" Feb 04 11:48:52 crc kubenswrapper[4728]: E0204 11:48:52.889940 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2\": container with ID starting with b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2 not found: ID does not exist" containerID="b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2" Feb 04 11:48:52 crc kubenswrapper[4728]: I0204 11:48:52.889982 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2"} err="failed to get container status \"b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2\": rpc error: code = NotFound desc = could not find container \"b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2\": container with ID starting with b38d458b34aa58a4eb2d32e45595e591e8e35c4dc4fadf1b807b87ee51ca9ad2 not found: ID does not exist" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.129872 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.138856 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.148869 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.164601 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.179596 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 11:48:53 crc kubenswrapper[4728]: E0204 11:48:53.180368 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5211fb35-aec6-47a3-b309-ec02052d52c0" containerName="nova-scheduler-scheduler" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.180391 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5211fb35-aec6-47a3-b309-ec02052d52c0" containerName="nova-scheduler-scheduler" Feb 04 11:48:53 crc kubenswrapper[4728]: E0204 11:48:53.180416 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579a7c27-d276-4066-b865-2f621e74410d" containerName="nova-cell1-novncproxy-novncproxy" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.180423 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="579a7c27-d276-4066-b865-2f621e74410d" containerName="nova-cell1-novncproxy-novncproxy" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.180633 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="5211fb35-aec6-47a3-b309-ec02052d52c0" containerName="nova-scheduler-scheduler" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 
11:48:53.180660 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="579a7c27-d276-4066-b865-2f621e74410d" containerName="nova-cell1-novncproxy-novncproxy" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.181495 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.183191 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.184099 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.184300 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.196860 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.199513 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.200946 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.202628 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.218121 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.292646 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.293081 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.293116 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbksg\" (UniqueName: \"kubernetes.io/projected/d8a68003-f2e4-4e86-9b51-9d6368701cf7-kube-api-access-gbksg\") pod \"nova-scheduler-0\" (UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.293175 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.293191 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.293279 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.293316 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsnvh\" (UniqueName: \"kubernetes.io/projected/eb3d9dc1-c306-408a-ae51-d025cf731399-kube-api-access-hsnvh\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.293377 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-config-data\") pod \"nova-scheduler-0\" (UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.395620 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-config-data\") pod \"nova-scheduler-0\" (UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.395726 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.395809 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.395839 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbksg\" (UniqueName: \"kubernetes.io/projected/d8a68003-f2e4-4e86-9b51-9d6368701cf7-kube-api-access-gbksg\") pod \"nova-scheduler-0\" (UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.395894 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.395919 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-combined-ca-bundle\") pod \"nova-scheduler-0\" 
(UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.395988 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.396013 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsnvh\" (UniqueName: \"kubernetes.io/projected/eb3d9dc1-c306-408a-ae51-d025cf731399-kube-api-access-hsnvh\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.400959 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.401505 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.401922 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-config-data\") pod \"nova-scheduler-0\" (UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.409567 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.409660 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.412538 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d9dc1-c306-408a-ae51-d025cf731399-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.413940 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbksg\" (UniqueName: \"kubernetes.io/projected/d8a68003-f2e4-4e86-9b51-9d6368701cf7-kube-api-access-gbksg\") pod \"nova-scheduler-0\" (UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") " pod="openstack/nova-scheduler-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.422538 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hsnvh\" (UniqueName: \"kubernetes.io/projected/eb3d9dc1-c306-408a-ae51-d025cf731399-kube-api-access-hsnvh\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb3d9dc1-c306-408a-ae51-d025cf731399\") " pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.564714 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.572348 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5211fb35-aec6-47a3-b309-ec02052d52c0" path="/var/lib/kubelet/pods/5211fb35-aec6-47a3-b309-ec02052d52c0/volumes" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.572807 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 11:48:53 crc kubenswrapper[4728]: I0204 11:48:53.573264 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579a7c27-d276-4066-b865-2f621e74410d" path="/var/lib/kubelet/pods/579a7c27-d276-4066-b865-2f621e74410d/volumes" Feb 04 11:48:54 crc kubenswrapper[4728]: W0204 11:48:54.054871 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb3d9dc1_c306_408a_ae51_d025cf731399.slice/crio-0286b3268f0593e6f665cb4eb7809d7f568394aa621021699cf30946ff1d7ab2 WatchSource:0}: Error finding container 0286b3268f0593e6f665cb4eb7809d7f568394aa621021699cf30946ff1d7ab2: Status 404 returned error can't find the container with id 0286b3268f0593e6f665cb4eb7809d7f568394aa621021699cf30946ff1d7ab2 Feb 04 11:48:54 crc kubenswrapper[4728]: I0204 11:48:54.055328 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 04 11:48:54 crc kubenswrapper[4728]: I0204 11:48:54.109136 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:48:54 crc kubenswrapper[4728]: W0204 11:48:54.126387 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8a68003_f2e4_4e86_9b51_9d6368701cf7.slice/crio-537412063a1df720a4ae9052c735633ceb1de1d9b197b58fe1d743d0ee2b864a WatchSource:0}: Error finding container 537412063a1df720a4ae9052c735633ceb1de1d9b197b58fe1d743d0ee2b864a: Status 404 returned error can't find the container with id 537412063a1df720a4ae9052c735633ceb1de1d9b197b58fe1d743d0ee2b864a Feb 04 11:48:54 crc kubenswrapper[4728]: I0204 11:48:54.813876 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb3d9dc1-c306-408a-ae51-d025cf731399","Type":"ContainerStarted","Data":"a335ec39aaf44af2b4e2e9509744226796b2c297c6a8c2eb461e22b309eec1ef"} Feb 04 11:48:54 crc kubenswrapper[4728]: I0204 11:48:54.814223 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb3d9dc1-c306-408a-ae51-d025cf731399","Type":"ContainerStarted","Data":"0286b3268f0593e6f665cb4eb7809d7f568394aa621021699cf30946ff1d7ab2"} Feb 04 11:48:54 crc kubenswrapper[4728]: I0204 11:48:54.815629 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8a68003-f2e4-4e86-9b51-9d6368701cf7","Type":"ContainerStarted","Data":"ff5d66c0625af6ed41a623b49283800d09aa5684c43e690f4edfcbc777711f19"} Feb 04 11:48:54 crc kubenswrapper[4728]: I0204 11:48:54.815663 4728 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8a68003-f2e4-4e86-9b51-9d6368701cf7","Type":"ContainerStarted","Data":"537412063a1df720a4ae9052c735633ceb1de1d9b197b58fe1d743d0ee2b864a"} Feb 04 11:48:54 crc kubenswrapper[4728]: I0204 11:48:54.834090 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.834070662 podStartE2EDuration="1.834070662s" podCreationTimestamp="2026-02-04 11:48:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:54.830520488 +0000 UTC m=+1283.973224873" watchObservedRunningTime="2026-02-04 11:48:54.834070662 +0000 UTC m=+1283.976775047" Feb 04 11:48:54 crc kubenswrapper[4728]: I0204 11:48:54.850716 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.850694196 podStartE2EDuration="1.850694196s" podCreationTimestamp="2026-02-04 11:48:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:54.846341702 +0000 UTC m=+1283.989046087" watchObservedRunningTime="2026-02-04 11:48:54.850694196 +0000 UTC m=+1283.993398581" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.027395 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.027876 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.028160 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.028431 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.031429 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.033308 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.053037 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.054449 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.058832 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.232741 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw"] Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.240094 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.255438 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw"] Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.335807 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.335851 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.335891 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.336061 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5shr\" (UniqueName: \"kubernetes.io/projected/1c65630e-c4d5-43d3-89c5-7e5a62951230-kube-api-access-r5shr\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.336324 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-config\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.336362 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.438507 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.438654 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.438682 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.438730 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.438807 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5shr\" (UniqueName: \"kubernetes.io/projected/1c65630e-c4d5-43d3-89c5-7e5a62951230-kube-api-access-r5shr\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.439739 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.439801 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.439806 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.440115 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-config\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.440398 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.440837 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-config\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.473644 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5shr\" (UniqueName: 
\"kubernetes.io/projected/1c65630e-c4d5-43d3-89c5-7e5a62951230-kube-api-access-r5shr\") pod \"dnsmasq-dns-6b7bbf7cf9-qzfrw\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.569904 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:55 crc kubenswrapper[4728]: I0204 11:48:55.837552 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 04 11:48:56 crc kubenswrapper[4728]: I0204 11:48:56.042451 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw"] Feb 04 11:48:56 crc kubenswrapper[4728]: I0204 11:48:56.832869 4728 generic.go:334] "Generic (PLEG): container finished" podID="1c65630e-c4d5-43d3-89c5-7e5a62951230" containerID="d44a16edb96af2c073cfa33aa3b0c24ecedc20a529125da10e5892ae9798d6b7" exitCode=0 Feb 04 11:48:56 crc kubenswrapper[4728]: I0204 11:48:56.832957 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" event={"ID":"1c65630e-c4d5-43d3-89c5-7e5a62951230","Type":"ContainerDied","Data":"d44a16edb96af2c073cfa33aa3b0c24ecedc20a529125da10e5892ae9798d6b7"} Feb 04 11:48:56 crc kubenswrapper[4728]: I0204 11:48:56.833240 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" event={"ID":"1c65630e-c4d5-43d3-89c5-7e5a62951230","Type":"ContainerStarted","Data":"8a0c1c6410499fdcb20f6629bdf1942855b92c34f1e14a45cfc0961c789aa8b6"} Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.513338 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.536562 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.536969 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="proxy-httpd" containerID="cri-o://1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd" gracePeriod=30 Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.536988 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="sg-core" containerID="cri-o://63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7" gracePeriod=30 Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.537068 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="ceilometer-notification-agent" containerID="cri-o://ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b" gracePeriod=30 Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.537289 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="ceilometer-central-agent" containerID="cri-o://977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953" gracePeriod=30 Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.548115 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="proxy-httpd" 
probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.844879 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" event={"ID":"1c65630e-c4d5-43d3-89c5-7e5a62951230","Type":"ContainerStarted","Data":"00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d"} Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.844966 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.848941 4728 generic.go:334] "Generic (PLEG): container finished" podID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerID="1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd" exitCode=0 Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.848973 4728 generic.go:334] "Generic (PLEG): container finished" podID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerID="63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7" exitCode=2 Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.848984 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b3dbbb6-a809-492a-b6c4-42741f7d1b43","Type":"ContainerDied","Data":"1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd"} Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.849021 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b3dbbb6-a809-492a-b6c4-42741f7d1b43","Type":"ContainerDied","Data":"63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7"} Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.849177 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerName="nova-api-log" containerID="cri-o://fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2" gracePeriod=30 Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.849262 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerName="nova-api-api" containerID="cri-o://bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6" gracePeriod=30 Feb 04 11:48:57 crc kubenswrapper[4728]: I0204 11:48:57.866133 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" podStartSLOduration=2.866111573 podStartE2EDuration="2.866111573s" podCreationTimestamp="2026-02-04 11:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:48:57.863794948 +0000 UTC m=+1287.006499333" watchObservedRunningTime="2026-02-04 11:48:57.866111573 +0000 UTC m=+1287.008815958" Feb 04 11:48:58 crc kubenswrapper[4728]: I0204 11:48:58.565677 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 04 11:48:58 crc kubenswrapper[4728]: I0204 11:48:58.574027 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 04 11:48:58 crc kubenswrapper[4728]: I0204 11:48:58.858872 4728 generic.go:334] "Generic (PLEG): container finished" podID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerID="fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2" exitCode=143 Feb 04 11:48:58 crc 
kubenswrapper[4728]: I0204 11:48:58.858941 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee5b119d-5791-4812-9ac7-f86ed1d734c8","Type":"ContainerDied","Data":"fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2"} Feb 04 11:48:58 crc kubenswrapper[4728]: I0204 11:48:58.862688 4728 generic.go:334] "Generic (PLEG): container finished" podID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerID="977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953" exitCode=0 Feb 04 11:48:58 crc kubenswrapper[4728]: I0204 11:48:58.863719 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b3dbbb6-a809-492a-b6c4-42741f7d1b43","Type":"ContainerDied","Data":"977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953"} Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.527872 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.630095 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnd28\" (UniqueName: \"kubernetes.io/projected/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-kube-api-access-jnd28\") pod \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.630144 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-combined-ca-bundle\") pod \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.630371 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-scripts\") pod \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.630440 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-sg-core-conf-yaml\") pod \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.630466 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-log-httpd\") pod \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.630521 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-run-httpd\") pod \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.630566 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-config-data\") pod \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\" (UID: \"6b3dbbb6-a809-492a-b6c4-42741f7d1b43\") " Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.631090 4728 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b3dbbb6-a809-492a-b6c4-42741f7d1b43" (UID: "6b3dbbb6-a809-492a-b6c4-42741f7d1b43"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.631238 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b3dbbb6-a809-492a-b6c4-42741f7d1b43" (UID: "6b3dbbb6-a809-492a-b6c4-42741f7d1b43"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.633250 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.633353 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.635317 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-kube-api-access-jnd28" (OuterVolumeSpecName: "kube-api-access-jnd28") pod "6b3dbbb6-a809-492a-b6c4-42741f7d1b43" (UID: "6b3dbbb6-a809-492a-b6c4-42741f7d1b43"). InnerVolumeSpecName "kube-api-access-jnd28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.636221 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-scripts" (OuterVolumeSpecName: "scripts") pod "6b3dbbb6-a809-492a-b6c4-42741f7d1b43" (UID: "6b3dbbb6-a809-492a-b6c4-42741f7d1b43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.665833 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b3dbbb6-a809-492a-b6c4-42741f7d1b43" (UID: "6b3dbbb6-a809-492a-b6c4-42741f7d1b43"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.731507 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b3dbbb6-a809-492a-b6c4-42741f7d1b43" (UID: "6b3dbbb6-a809-492a-b6c4-42741f7d1b43"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.737114 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.737146 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.737162 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnd28\" (UniqueName: \"kubernetes.io/projected/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-kube-api-access-jnd28\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.737178 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.739481 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-config-data" (OuterVolumeSpecName: "config-data") pod "6b3dbbb6-a809-492a-b6c4-42741f7d1b43" (UID: "6b3dbbb6-a809-492a-b6c4-42741f7d1b43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.838599 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b3dbbb6-a809-492a-b6c4-42741f7d1b43-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.880886 4728 generic.go:334] "Generic (PLEG): container finished" podID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerID="ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b" exitCode=0 Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.881481 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b3dbbb6-a809-492a-b6c4-42741f7d1b43","Type":"ContainerDied","Data":"ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b"} Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.881520 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b3dbbb6-a809-492a-b6c4-42741f7d1b43","Type":"ContainerDied","Data":"3dce84795bb4facd8330177ba5cf97ec968bfb955657b79e677eabe6a623b6a4"} Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.881554 4728 scope.go:117] "RemoveContainer" containerID="1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd" Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.881800 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.908844 4728 scope.go:117] "RemoveContainer" containerID="63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.938001 4728 scope.go:117] "RemoveContainer" containerID="ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.945140 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.968245 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.972545 4728 scope.go:117] "RemoveContainer" containerID="977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.977689 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 04 11:48:59 crc kubenswrapper[4728]: E0204 11:48:59.978198 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="sg-core"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.978219 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="sg-core"
Feb 04 11:48:59 crc kubenswrapper[4728]: E0204 11:48:59.978237 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="ceilometer-central-agent"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.978242 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="ceilometer-central-agent"
Feb 04 11:48:59 crc kubenswrapper[4728]: E0204 11:48:59.978253 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="ceilometer-notification-agent"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.978259 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="ceilometer-notification-agent"
Feb 04 11:48:59 crc kubenswrapper[4728]: E0204 11:48:59.978275 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="proxy-httpd"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.978280 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="proxy-httpd"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.978512 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="proxy-httpd"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.978529 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="sg-core"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.978543 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="ceilometer-notification-agent"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.978556 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" containerName="ceilometer-central-agent"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.980222 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.982965 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.983027 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.986318 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.998311 4728 scope.go:117] "RemoveContainer" containerID="1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd"
Feb 04 11:48:59 crc kubenswrapper[4728]: E0204 11:48:59.998889 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd\": container with ID starting with 1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd not found: ID does not exist" containerID="1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.998930 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd"} err="failed to get container status \"1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd\": rpc error: code = NotFound desc = could not find container \"1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd\": container with ID starting with 1a0895f657649dad7645eafbfd9750f45d9281e1c46d29c08c50ad6ff4fb2bfd not found: ID does not exist"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.998957 4728 scope.go:117] "RemoveContainer" containerID="63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7"
Feb 04 11:48:59 crc kubenswrapper[4728]: E0204 11:48:59.999191 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7\": container with ID starting with 63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7 not found: ID does not exist" containerID="63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.999208 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7"} err="failed to get container status \"63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7\": rpc error: code = NotFound desc = could not find container \"63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7\": container with ID starting with 63dce5c1714476721b7c11ba38abf7cf3d3c16b98b5d8c2d2b3f86ccc2463fe7 not found: ID does not exist"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.999220 4728 scope.go:117] "RemoveContainer" containerID="ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b"
Feb 04 11:48:59 crc kubenswrapper[4728]: E0204 11:48:59.999478 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b\": container with ID starting with ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b not found: ID does not exist" containerID="ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.999494 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b"} err="failed to get container status \"ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b\": rpc error: code = NotFound desc = could not find container \"ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b\": container with ID starting with ddefb5c8adadc75cd2c67ea6ecc5bdb986df728bd575cdbde99e0ee0bb85470b not found: ID does not exist"
Feb 04 11:48:59 crc kubenswrapper[4728]: I0204 11:48:59.999506 4728 scope.go:117] "RemoveContainer" containerID="977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953"
Feb 04 11:49:00 crc kubenswrapper[4728]: E0204 11:49:00.001667 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953\": container with ID starting with 977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953 not found: ID does not exist" containerID="977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.001729 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953"} err="failed to get container status \"977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953\": rpc error: code = NotFound desc = could not find container \"977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953\": container with ID starting with 977fe91663fef74cd634f9f49e91c30b2a5a7d5b082427f055f2bbaaecf2b953 not found: ID does not exist"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.041385 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-log-httpd\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.041456 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.041483 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-scripts\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.041503 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-config-data\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.041530 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.041577 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkdh\" (UniqueName: \"kubernetes.io/projected/af36dea8-dd50-4419-9077-8832092343b5-kube-api-access-6jkdh\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.041663 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-run-httpd\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.142907 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-run-httpd\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.143179 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-log-httpd\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.143273 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.143347 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-scripts\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.143417 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-config-data\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.143488 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-log-httpd\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.143513 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.143669 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jkdh\" (UniqueName: \"kubernetes.io/projected/af36dea8-dd50-4419-9077-8832092343b5-kube-api-access-6jkdh\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.143444 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-run-httpd\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.147811 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.147935 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-scripts\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.149608 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.151154 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-config-data\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.160599 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jkdh\" (UniqueName: \"kubernetes.io/projected/af36dea8-dd50-4419-9077-8832092343b5-kube-api-access-6jkdh\") pod \"ceilometer-0\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") " pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.306810 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.744260 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 04 11:49:00 crc kubenswrapper[4728]: I0204 11:49:00.891914 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af36dea8-dd50-4419-9077-8832092343b5","Type":"ContainerStarted","Data":"92494c5f5b0fde69c3c4d731c3bf6ec439d459291ac24ab2d8d25c9669921483"}
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.481411 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.574446 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b3dbbb6-a809-492a-b6c4-42741f7d1b43" path="/var/lib/kubelet/pods/6b3dbbb6-a809-492a-b6c4-42741f7d1b43/volumes"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.673579 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee5b119d-5791-4812-9ac7-f86ed1d734c8-logs\") pod \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") "
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.673625 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-combined-ca-bundle\") pod \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") "
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.673837 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qjsw\" (UniqueName: \"kubernetes.io/projected/ee5b119d-5791-4812-9ac7-f86ed1d734c8-kube-api-access-4qjsw\") pod \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") "
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.673882 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-config-data\") pod \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\" (UID: \"ee5b119d-5791-4812-9ac7-f86ed1d734c8\") "
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.674021 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5b119d-5791-4812-9ac7-f86ed1d734c8-logs" (OuterVolumeSpecName: "logs") pod "ee5b119d-5791-4812-9ac7-f86ed1d734c8" (UID: "ee5b119d-5791-4812-9ac7-f86ed1d734c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.674353 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee5b119d-5791-4812-9ac7-f86ed1d734c8-logs\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.679157 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5b119d-5791-4812-9ac7-f86ed1d734c8-kube-api-access-4qjsw" (OuterVolumeSpecName: "kube-api-access-4qjsw") pod "ee5b119d-5791-4812-9ac7-f86ed1d734c8" (UID: "ee5b119d-5791-4812-9ac7-f86ed1d734c8"). InnerVolumeSpecName "kube-api-access-4qjsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.709895 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee5b119d-5791-4812-9ac7-f86ed1d734c8" (UID: "ee5b119d-5791-4812-9ac7-f86ed1d734c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.709997 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-config-data" (OuterVolumeSpecName: "config-data") pod "ee5b119d-5791-4812-9ac7-f86ed1d734c8" (UID: "ee5b119d-5791-4812-9ac7-f86ed1d734c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.776381 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qjsw\" (UniqueName: \"kubernetes.io/projected/ee5b119d-5791-4812-9ac7-f86ed1d734c8-kube-api-access-4qjsw\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.776422 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-config-data\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.776436 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee5b119d-5791-4812-9ac7-f86ed1d734c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.903432 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af36dea8-dd50-4419-9077-8832092343b5","Type":"ContainerStarted","Data":"d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c"}
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.905329 4728 generic.go:334] "Generic (PLEG): container finished" podID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerID="bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6" exitCode=0
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.905372 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee5b119d-5791-4812-9ac7-f86ed1d734c8","Type":"ContainerDied","Data":"bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6"}
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.905397 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee5b119d-5791-4812-9ac7-f86ed1d734c8","Type":"ContainerDied","Data":"1a7974f3a014572cf79125426cef70584fc8f2e12a59878f40822977fdde0331"}
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.905415 4728 scope.go:117] "RemoveContainer" containerID="bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.905431 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.929323 4728 scope.go:117] "RemoveContainer" containerID="fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.962452 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.966278 4728 scope.go:117] "RemoveContainer" containerID="bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6"
Feb 04 11:49:01 crc kubenswrapper[4728]: E0204 11:49:01.967187 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6\": container with ID starting with bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6 not found: ID does not exist" containerID="bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.967241 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6"} err="failed to get container status \"bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6\": rpc error: code = NotFound desc = could not find container \"bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6\": container with ID starting with bd667acc42fb5e45535f1bb4db6823a4a54e09ac4c7793fe7248ed233cc200a6 not found: ID does not exist"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.967281 4728 scope.go:117] "RemoveContainer" containerID="fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2"
Feb 04 11:49:01 crc kubenswrapper[4728]: E0204 11:49:01.967634 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2\": container with ID starting with fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2 not found: ID does not exist" containerID="fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.967669 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2"} err="failed to get container status \"fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2\": rpc error: code = NotFound desc = could not find container \"fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2\": container with ID starting with fecbc483a210ee5d90caefce1c5f467942bcfd1bd14ec45b2e9541ec9104daf2 not found: ID does not exist"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.974118 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.989852 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:49:01 crc kubenswrapper[4728]: E0204 11:49:01.990276 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerName="nova-api-log"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.990292 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerName="nova-api-log"
Feb 04 11:49:01 crc kubenswrapper[4728]: E0204 11:49:01.990311 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerName="nova-api-api"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.990316 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerName="nova-api-api"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.990511 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerName="nova-api-api"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.990528 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" containerName="nova-api-log"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.991694 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.994100 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.994316 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 04 11:49:01 crc kubenswrapper[4728]: I0204 11:49:01.996130 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.038245 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.082504 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-public-tls-certs\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.082852 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82pm7\" (UniqueName: \"kubernetes.io/projected/df9ea125-5b52-4638-830a-642e1edeacda-kube-api-access-82pm7\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.082947 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df9ea125-5b52-4638-830a-642e1edeacda-logs\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.083117 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-config-data\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.083223 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-internal-tls-certs\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.083598 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.185512 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82pm7\" (UniqueName: \"kubernetes.io/projected/df9ea125-5b52-4638-830a-642e1edeacda-kube-api-access-82pm7\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.185562 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df9ea125-5b52-4638-830a-642e1edeacda-logs\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.185673 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-config-data\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.185699 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-internal-tls-certs\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.185737 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.185823 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-public-tls-certs\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.186092 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df9ea125-5b52-4638-830a-642e1edeacda-logs\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.191220 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-internal-tls-certs\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.192083 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-config-data\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.199575 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-public-tls-certs\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.202893 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.203513 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82pm7\" (UniqueName: \"kubernetes.io/projected/df9ea125-5b52-4638-830a-642e1edeacda-kube-api-access-82pm7\") pod \"nova-api-0\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.324339 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.790072 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.921838 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af36dea8-dd50-4419-9077-8832092343b5","Type":"ContainerStarted","Data":"b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db"}
Feb 04 11:49:02 crc kubenswrapper[4728]: I0204 11:49:02.926957 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df9ea125-5b52-4638-830a-642e1edeacda","Type":"ContainerStarted","Data":"da5d8223bee83f47ed2341fdad6747a562ead4d93ea3e3986aa3ca3dd5136a7e"}
Feb 04 11:49:03 crc kubenswrapper[4728]: I0204 11:49:03.573669 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee5b119d-5791-4812-9ac7-f86ed1d734c8" path="/var/lib/kubelet/pods/ee5b119d-5791-4812-9ac7-f86ed1d734c8/volumes"
Feb 04 11:49:03 crc kubenswrapper[4728]: I0204 11:49:03.575203 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 04 11:49:03 crc kubenswrapper[4728]: I0204 11:49:03.575331 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 04 11:49:03 crc kubenswrapper[4728]: I0204 11:49:03.593184 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 04 11:49:03 crc kubenswrapper[4728]: I0204 11:49:03.616423 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 04 11:49:03 crc kubenswrapper[4728]: I0204 11:49:03.943001 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af36dea8-dd50-4419-9077-8832092343b5","Type":"ContainerStarted","Data":"8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96"}
Feb 04 11:49:03 crc kubenswrapper[4728]: I0204 11:49:03.945971 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df9ea125-5b52-4638-830a-642e1edeacda","Type":"ContainerStarted","Data":"a908af24ecb02b4fb391deca169fde709052ef74f63d4438c74a6d9699af300d"}
Feb 04 11:49:03 crc kubenswrapper[4728]: I0204 11:49:03.946009 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df9ea125-5b52-4638-830a-642e1edeacda","Type":"ContainerStarted","Data":"7d4a15fc2cc0141be955db843e9bbbe7c1548f34593515af47bf79c189e1744a"}
Feb 04 11:49:03 crc kubenswrapper[4728]: I0204 11:49:03.974878 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.97485918 podStartE2EDuration="2.97485918s" podCreationTimestamp="2026-02-04 11:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:49:03.967007275 +0000 UTC m=+1293.109711660" watchObservedRunningTime="2026-02-04 11:49:03.97485918 +0000 UTC m=+1293.117563555"
Feb 04 11:49:03 crc kubenswrapper[4728]: I0204 11:49:03.978593 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 04 11:49:03 crc kubenswrapper[4728]: I0204 11:49:03.986091 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.249768 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6blvv"]
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.251055 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.255144 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.259018 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.277127 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6blvv"]
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.429100 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.429382 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwnt6\" (UniqueName: \"kubernetes.io/projected/47678b2a-6ab4-4150-b1b8-091d4e500d2e-kube-api-access-wwnt6\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.429437 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-config-data\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.429511 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-scripts\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.531449 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwnt6\" (UniqueName: \"kubernetes.io/projected/47678b2a-6ab4-4150-b1b8-091d4e500d2e-kube-api-access-wwnt6\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.531545 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-config-data\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.531604 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-scripts\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.531773 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.537067 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-config-data\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.537193 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.537420 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-scripts\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.551703 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwnt6\" (UniqueName: \"kubernetes.io/projected/47678b2a-6ab4-4150-b1b8-091d4e500d2e-kube-api-access-wwnt6\") pod \"nova-cell1-cell-mapping-6blvv\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") " pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:04 crc kubenswrapper[4728]: I0204 11:49:04.570907 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:05 crc kubenswrapper[4728]: I0204 11:49:05.034870 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6blvv"]
Feb 04 11:49:05 crc kubenswrapper[4728]: I0204 11:49:05.448410 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 11:49:05 crc kubenswrapper[4728]: I0204 11:49:05.448732 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 11:49:05 crc kubenswrapper[4728]: I0204 11:49:05.570930 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw"
Feb 04 11:49:05 crc kubenswrapper[4728]: I0204 11:49:05.642409 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-r2b52"]
Feb 04 11:49:05 crc kubenswrapper[4728]: I0204 11:49:05.642783 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-r2b52" podUID="2dc28b54-9e03-4356-9586-198cbebe01bc" containerName="dnsmasq-dns" containerID="cri-o://088d238541b85a306b36fef383dc8808b5ad9d1a0ebb5a4d8b3a623ff9ddcc90" gracePeriod=10
Feb 04 11:49:05 crc kubenswrapper[4728]: I0204 11:49:05.973933 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6blvv" event={"ID":"47678b2a-6ab4-4150-b1b8-091d4e500d2e","Type":"ContainerStarted","Data":"075327945d66de278c1c9961e1390d4fdf1c1e23fe3e99e878c1a99557a3b814"}
Feb 04 11:49:05 crc kubenswrapper[4728]: I0204 11:49:05.974167 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6blvv" event={"ID":"47678b2a-6ab4-4150-b1b8-091d4e500d2e","Type":"ContainerStarted","Data":"dd49272aa27a73095278d1390e50fe571b753cab562d9034fc4bb83f1e945de8"}
Feb 04 11:49:05 crc kubenswrapper[4728]: I0204 11:49:05.982764 4728 generic.go:334] "Generic (PLEG): container finished" podID="2dc28b54-9e03-4356-9586-198cbebe01bc" containerID="088d238541b85a306b36fef383dc8808b5ad9d1a0ebb5a4d8b3a623ff9ddcc90" exitCode=0
Feb 04 11:49:05 crc kubenswrapper[4728]: I0204 11:49:05.982805 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-r2b52" event={"ID":"2dc28b54-9e03-4356-9586-198cbebe01bc","Type":"ContainerDied","Data":"088d238541b85a306b36fef383dc8808b5ad9d1a0ebb5a4d8b3a623ff9ddcc90"}
Feb 04 11:49:05 crc kubenswrapper[4728]: I0204 11:49:05.995151 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6blvv" podStartSLOduration=1.9951323109999999 podStartE2EDuration="1.995132311s" podCreationTimestamp="2026-02-04 11:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:49:05.987632404 +0000 UTC m=+1295.130336789" watchObservedRunningTime="2026-02-04 11:49:05.995132311 +0000 UTC m=+1295.137836696"
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.216401 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-r2b52"
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.371689 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-swift-storage-0\") pod \"2dc28b54-9e03-4356-9586-198cbebe01bc\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") "
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.372681 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhfxf\" (UniqueName: \"kubernetes.io/projected/2dc28b54-9e03-4356-9586-198cbebe01bc-kube-api-access-mhfxf\") pod \"2dc28b54-9e03-4356-9586-198cbebe01bc\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") "
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.372822 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-sb\") pod \"2dc28b54-9e03-4356-9586-198cbebe01bc\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") "
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.372956 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-nb\") pod \"2dc28b54-9e03-4356-9586-198cbebe01bc\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") "
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.373133 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-config\") pod \"2dc28b54-9e03-4356-9586-198cbebe01bc\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") "
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.373338 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-svc\") pod \"2dc28b54-9e03-4356-9586-198cbebe01bc\" (UID: \"2dc28b54-9e03-4356-9586-198cbebe01bc\") "
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.376960 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc28b54-9e03-4356-9586-198cbebe01bc-kube-api-access-mhfxf" (OuterVolumeSpecName: "kube-api-access-mhfxf") pod "2dc28b54-9e03-4356-9586-198cbebe01bc" (UID: "2dc28b54-9e03-4356-9586-198cbebe01bc"). InnerVolumeSpecName "kube-api-access-mhfxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.424266 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2dc28b54-9e03-4356-9586-198cbebe01bc" (UID: "2dc28b54-9e03-4356-9586-198cbebe01bc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.426211 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2dc28b54-9e03-4356-9586-198cbebe01bc" (UID: "2dc28b54-9e03-4356-9586-198cbebe01bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.437902 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-config" (OuterVolumeSpecName: "config") pod "2dc28b54-9e03-4356-9586-198cbebe01bc" (UID: "2dc28b54-9e03-4356-9586-198cbebe01bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.445339 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2dc28b54-9e03-4356-9586-198cbebe01bc" (UID: "2dc28b54-9e03-4356-9586-198cbebe01bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.448374 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2dc28b54-9e03-4356-9586-198cbebe01bc" (UID: "2dc28b54-9e03-4356-9586-198cbebe01bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.475580 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhfxf\" (UniqueName: \"kubernetes.io/projected/2dc28b54-9e03-4356-9586-198cbebe01bc-kube-api-access-mhfxf\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.475611 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.475622 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.475632 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-config\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.475640 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.475649 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dc28b54-9e03-4356-9586-198cbebe01bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:06 crc kubenswrapper[4728]: I0204 11:49:06.998938 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af36dea8-dd50-4419-9077-8832092343b5","Type":"ContainerStarted","Data":"5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc"}
Feb 04 11:49:07 crc kubenswrapper[4728]: I0204 11:49:07.000379 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 04 11:49:07 crc kubenswrapper[4728]: I0204 11:49:07.005247 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-r2b52" event={"ID":"2dc28b54-9e03-4356-9586-198cbebe01bc","Type":"ContainerDied","Data":"a085fbbecdb504e04320f80d5eb472c0ec63726e43bf89cc3fb10e4cf118d26a"}
Feb 04 11:49:07 crc kubenswrapper[4728]: I0204 11:49:07.005336 4728 scope.go:117] "RemoveContainer" containerID="088d238541b85a306b36fef383dc8808b5ad9d1a0ebb5a4d8b3a623ff9ddcc90"
Feb 04 11:49:07 crc kubenswrapper[4728]: I0204 11:49:07.005356 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-r2b52"
Feb 04 11:49:07 crc kubenswrapper[4728]: I0204 11:49:07.041968 4728 scope.go:117] "RemoveContainer" containerID="b86268580ce94c2fc44a1665453d757a90aa363e09745db1af521b730f1dfb55"
Feb 04 11:49:07 crc kubenswrapper[4728]: I0204 11:49:07.042116 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.939360979 podStartE2EDuration="8.042106883s" podCreationTimestamp="2026-02-04 11:48:59 +0000 UTC" firstStartedPulling="2026-02-04 11:49:00.749557221 +0000 UTC m=+1289.892261606" lastFinishedPulling="2026-02-04 11:49:05.852303135 +0000 UTC m=+1294.995007510" observedRunningTime="2026-02-04 11:49:07.029538976 +0000 UTC m=+1296.172243371" watchObservedRunningTime="2026-02-04 11:49:07.042106883 +0000 UTC m=+1296.184811268"
Feb 04 11:49:07 crc kubenswrapper[4728]: I0204 11:49:07.070833 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-r2b52"]
Feb 04 11:49:07 crc kubenswrapper[4728]: I0204 11:49:07.083144 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-r2b52"]
Feb 04 11:49:07 crc kubenswrapper[4728]: I0204 11:49:07.565039 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc28b54-9e03-4356-9586-198cbebe01bc" path="/var/lib/kubelet/pods/2dc28b54-9e03-4356-9586-198cbebe01bc/volumes"
Feb 04 11:49:11 crc kubenswrapper[4728]: I0204 11:49:11.045669 4728 generic.go:334] "Generic (PLEG): container finished" podID="47678b2a-6ab4-4150-b1b8-091d4e500d2e" containerID="075327945d66de278c1c9961e1390d4fdf1c1e23fe3e99e878c1a99557a3b814" exitCode=0
Feb 04 11:49:11 crc kubenswrapper[4728]: I0204 11:49:11.045810 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6blvv" event={"ID":"47678b2a-6ab4-4150-b1b8-091d4e500d2e","Type":"ContainerDied","Data":"075327945d66de278c1c9961e1390d4fdf1c1e23fe3e99e878c1a99557a3b814"}
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.330405 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.330716 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.415088 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.583269 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwnt6\" (UniqueName: \"kubernetes.io/projected/47678b2a-6ab4-4150-b1b8-091d4e500d2e-kube-api-access-wwnt6\") pod \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") "
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.583427 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-scripts\") pod \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") "
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.583511 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-combined-ca-bundle\") pod \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") "
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.583538 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-config-data\") pod \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\" (UID: \"47678b2a-6ab4-4150-b1b8-091d4e500d2e\") "
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.589705 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-scripts" (OuterVolumeSpecName: "scripts") pod "47678b2a-6ab4-4150-b1b8-091d4e500d2e" (UID: "47678b2a-6ab4-4150-b1b8-091d4e500d2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.590016 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47678b2a-6ab4-4150-b1b8-091d4e500d2e-kube-api-access-wwnt6" (OuterVolumeSpecName: "kube-api-access-wwnt6") pod "47678b2a-6ab4-4150-b1b8-091d4e500d2e" (UID: "47678b2a-6ab4-4150-b1b8-091d4e500d2e"). InnerVolumeSpecName "kube-api-access-wwnt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.620001 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47678b2a-6ab4-4150-b1b8-091d4e500d2e" (UID: "47678b2a-6ab4-4150-b1b8-091d4e500d2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.638920 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-config-data" (OuterVolumeSpecName: "config-data") pod "47678b2a-6ab4-4150-b1b8-091d4e500d2e" (UID: "47678b2a-6ab4-4150-b1b8-091d4e500d2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.685971 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.686012 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.686023 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47678b2a-6ab4-4150-b1b8-091d4e500d2e-config-data\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:12 crc kubenswrapper[4728]: I0204 11:49:12.686032 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwnt6\" (UniqueName: \"kubernetes.io/projected/47678b2a-6ab4-4150-b1b8-091d4e500d2e-kube-api-access-wwnt6\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.069933 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6blvv" event={"ID":"47678b2a-6ab4-4150-b1b8-091d4e500d2e","Type":"ContainerDied","Data":"dd49272aa27a73095278d1390e50fe571b753cab562d9034fc4bb83f1e945de8"}
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.069971 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd49272aa27a73095278d1390e50fe571b753cab562d9034fc4bb83f1e945de8"
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.070021 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6blvv"
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.264401 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.265089 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df9ea125-5b52-4638-830a-642e1edeacda" containerName="nova-api-api" containerID="cri-o://a908af24ecb02b4fb391deca169fde709052ef74f63d4438c74a6d9699af300d" gracePeriod=30
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.265393 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df9ea125-5b52-4638-830a-642e1edeacda" containerName="nova-api-log" containerID="cri-o://7d4a15fc2cc0141be955db843e9bbbe7c1548f34593515af47bf79c189e1744a" gracePeriod=30
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.277700 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.277806 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="df9ea125-5b52-4638-830a-642e1edeacda" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.213:8774/\": EOF"
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.277971 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d8a68003-f2e4-4e86-9b51-9d6368701cf7" containerName="nova-scheduler-scheduler" containerID="cri-o://ff5d66c0625af6ed41a623b49283800d09aa5684c43e690f4edfcbc777711f19" gracePeriod=30
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.278603 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="df9ea125-5b52-4638-830a-642e1edeacda" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.213:8774/\": EOF"
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.302632 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.302903 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-log" containerID="cri-o://1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36" gracePeriod=30
Feb 04 11:49:13 crc kubenswrapper[4728]: I0204 11:49:13.303026 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-metadata" containerID="cri-o://9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad" gracePeriod=30
Feb 04 11:49:13 crc kubenswrapper[4728]: E0204 11:49:13.575152 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ff5d66c0625af6ed41a623b49283800d09aa5684c43e690f4edfcbc777711f19" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 04 11:49:13 crc kubenswrapper[4728]: E0204 11:49:13.576907 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ff5d66c0625af6ed41a623b49283800d09aa5684c43e690f4edfcbc777711f19" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 04 11:49:13 crc kubenswrapper[4728]: E0204 11:49:13.578722 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ff5d66c0625af6ed41a623b49283800d09aa5684c43e690f4edfcbc777711f19" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 04 11:49:13 crc kubenswrapper[4728]: E0204 11:49:13.578781 4728 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d8a68003-f2e4-4e86-9b51-9d6368701cf7" containerName="nova-scheduler-scheduler"
Feb 04 11:49:14 crc kubenswrapper[4728]: I0204 11:49:14.083648 4728 generic.go:334] "Generic (PLEG): container finished" podID="df9ea125-5b52-4638-830a-642e1edeacda" containerID="7d4a15fc2cc0141be955db843e9bbbe7c1548f34593515af47bf79c189e1744a" exitCode=143
Feb 04 11:49:14 crc kubenswrapper[4728]: I0204 11:49:14.083723 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df9ea125-5b52-4638-830a-642e1edeacda","Type":"ContainerDied","Data":"7d4a15fc2cc0141be955db843e9bbbe7c1548f34593515af47bf79c189e1744a"}
Feb 04 11:49:14 crc kubenswrapper[4728]: I0204 11:49:14.085798 4728 generic.go:334] "Generic (PLEG): container finished" podID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerID="1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36" exitCode=143
Feb 04 11:49:14 crc kubenswrapper[4728]: I0204 11:49:14.085824 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f225cf5-e1d8-4c10-893b-a497f9959caf","Type":"ContainerDied","Data":"1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36"}
Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.120325 4728 generic.go:334] "Generic (PLEG): container finished" podID="d8a68003-f2e4-4e86-9b51-9d6368701cf7" containerID="ff5d66c0625af6ed41a623b49283800d09aa5684c43e690f4edfcbc777711f19" exitCode=0
Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.120440 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8a68003-f2e4-4e86-9b51-9d6368701cf7","Type":"ContainerDied","Data":"ff5d66c0625af6ed41a623b49283800d09aa5684c43e690f4edfcbc777711f19"}
Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.455973 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:53360->10.217.0.208:8775: read: connection reset by peer"
Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.455973 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:53354->10.217.0.208:8775: read: connection reset by peer"
Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.640570 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.662618 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-combined-ca-bundle\") pod \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\" (UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") "
Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.662787 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-config-data\") pod \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\" (UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") "
Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.662872 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbksg\" (UniqueName: \"kubernetes.io/projected/d8a68003-f2e4-4e86-9b51-9d6368701cf7-kube-api-access-gbksg\") pod \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\" (UID: \"d8a68003-f2e4-4e86-9b51-9d6368701cf7\") "
Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.670167 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a68003-f2e4-4e86-9b51-9d6368701cf7-kube-api-access-gbksg" (OuterVolumeSpecName: "kube-api-access-gbksg") pod "d8a68003-f2e4-4e86-9b51-9d6368701cf7" (UID: "d8a68003-f2e4-4e86-9b51-9d6368701cf7"). InnerVolumeSpecName "kube-api-access-gbksg".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.707320 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-config-data" (OuterVolumeSpecName: "config-data") pod "d8a68003-f2e4-4e86-9b51-9d6368701cf7" (UID: "d8a68003-f2e4-4e86-9b51-9d6368701cf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.709408 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8a68003-f2e4-4e86-9b51-9d6368701cf7" (UID: "d8a68003-f2e4-4e86-9b51-9d6368701cf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.764933 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.764972 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8a68003-f2e4-4e86-9b51-9d6368701cf7-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.764985 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbksg\" (UniqueName: \"kubernetes.io/projected/d8a68003-f2e4-4e86-9b51-9d6368701cf7-kube-api-access-gbksg\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.810472 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.866138 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-combined-ca-bundle\") pod \"3f225cf5-e1d8-4c10-893b-a497f9959caf\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.866202 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-config-data\") pod \"3f225cf5-e1d8-4c10-893b-a497f9959caf\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.866283 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-nova-metadata-tls-certs\") pod \"3f225cf5-e1d8-4c10-893b-a497f9959caf\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.866363 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bngdz\" (UniqueName: \"kubernetes.io/projected/3f225cf5-e1d8-4c10-893b-a497f9959caf-kube-api-access-bngdz\") pod \"3f225cf5-e1d8-4c10-893b-a497f9959caf\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.867233 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f225cf5-e1d8-4c10-893b-a497f9959caf-logs\") pod \"3f225cf5-e1d8-4c10-893b-a497f9959caf\" (UID: \"3f225cf5-e1d8-4c10-893b-a497f9959caf\") " Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.868186 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f225cf5-e1d8-4c10-893b-a497f9959caf-logs" (OuterVolumeSpecName: "logs") pod "3f225cf5-e1d8-4c10-893b-a497f9959caf" (UID: "3f225cf5-e1d8-4c10-893b-a497f9959caf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.871966 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f225cf5-e1d8-4c10-893b-a497f9959caf-kube-api-access-bngdz" (OuterVolumeSpecName: "kube-api-access-bngdz") pod "3f225cf5-e1d8-4c10-893b-a497f9959caf" (UID: "3f225cf5-e1d8-4c10-893b-a497f9959caf"). InnerVolumeSpecName "kube-api-access-bngdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.902539 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-config-data" (OuterVolumeSpecName: "config-data") pod "3f225cf5-e1d8-4c10-893b-a497f9959caf" (UID: "3f225cf5-e1d8-4c10-893b-a497f9959caf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.905121 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f225cf5-e1d8-4c10-893b-a497f9959caf" (UID: "3f225cf5-e1d8-4c10-893b-a497f9959caf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.930511 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3f225cf5-e1d8-4c10-893b-a497f9959caf" (UID: "3f225cf5-e1d8-4c10-893b-a497f9959caf"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.969364 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.969410 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.969424 4728 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f225cf5-e1d8-4c10-893b-a497f9959caf-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.969438 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bngdz\" (UniqueName: \"kubernetes.io/projected/3f225cf5-e1d8-4c10-893b-a497f9959caf-kube-api-access-bngdz\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:16 crc kubenswrapper[4728]: I0204 11:49:16.969450 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f225cf5-e1d8-4c10-893b-a497f9959caf-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.133702 4728 generic.go:334] "Generic (PLEG): container finished" podID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerID="9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad" exitCode=0 Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.133791 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.133815 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f225cf5-e1d8-4c10-893b-a497f9959caf","Type":"ContainerDied","Data":"9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad"} Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.133850 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f225cf5-e1d8-4c10-893b-a497f9959caf","Type":"ContainerDied","Data":"cd9d9eacc9d17980711c3f3b0f6924fc750c67e56b614730faf9fb811c6cde9b"} Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.133869 4728 scope.go:117] "RemoveContainer" containerID="9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.138473 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8a68003-f2e4-4e86-9b51-9d6368701cf7","Type":"ContainerDied","Data":"537412063a1df720a4ae9052c735633ceb1de1d9b197b58fe1d743d0ee2b864a"} Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.138504 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.190652 4728 scope.go:117] "RemoveContainer" containerID="1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.197132 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.214655 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.238110 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.238593 4728 scope.go:117] "RemoveContainer" containerID="9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad" Feb 04 11:49:17 crc kubenswrapper[4728]: E0204 11:49:17.239135 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad\": container with ID starting with 9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad not found: ID does not exist" containerID="9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.239173 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad"} err="failed to get container status \"9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad\": rpc error: code = NotFound desc = could not find container \"9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad\": container with ID starting with 9f856c15cfa327b407cb3a9b0bee15cfc42581c9b0456d5c122338d3078cbcad not found: ID does not exist" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.239219 4728 scope.go:117] "RemoveContainer" containerID="1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36" Feb 04 11:49:17 crc kubenswrapper[4728]: E0204 11:49:17.239561 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36\": container with ID starting with 1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36 not found: ID does not exist" containerID="1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.239595 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36"} err="failed to get container status \"1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36\": rpc error: code = NotFound desc = could not find container \"1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36\": container with ID starting with 1f5b63fa4e0800f3a7a9692c44f7d0aa34cb71483e928964addff15d4514ba36 not found: ID does not exist" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.239622 4728 scope.go:117] "RemoveContainer" containerID="ff5d66c0625af6ed41a623b49283800d09aa5684c43e690f4edfcbc777711f19" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.265858 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 
11:49:17.281438 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:49:17 crc kubenswrapper[4728]: E0204 11:49:17.281901 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc28b54-9e03-4356-9586-198cbebe01bc" containerName="dnsmasq-dns" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.281927 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc28b54-9e03-4356-9586-198cbebe01bc" containerName="dnsmasq-dns" Feb 04 11:49:17 crc kubenswrapper[4728]: E0204 11:49:17.281938 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47678b2a-6ab4-4150-b1b8-091d4e500d2e" containerName="nova-manage" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.281946 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="47678b2a-6ab4-4150-b1b8-091d4e500d2e" containerName="nova-manage" Feb 04 11:49:17 crc kubenswrapper[4728]: E0204 11:49:17.281962 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc28b54-9e03-4356-9586-198cbebe01bc" containerName="init" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.281970 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc28b54-9e03-4356-9586-198cbebe01bc" containerName="init" Feb 04 11:49:17 crc kubenswrapper[4728]: E0204 11:49:17.281979 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-log" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.281986 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-log" Feb 04 11:49:17 crc kubenswrapper[4728]: E0204 11:49:17.281999 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a68003-f2e4-4e86-9b51-9d6368701cf7" containerName="nova-scheduler-scheduler" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.282007 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a68003-f2e4-4e86-9b51-9d6368701cf7" containerName="nova-scheduler-scheduler" Feb 04 11:49:17 crc kubenswrapper[4728]: E0204 11:49:17.282023 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-metadata" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.282030 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-metadata" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.282269 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a68003-f2e4-4e86-9b51-9d6368701cf7" containerName="nova-scheduler-scheduler" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.282286 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-log" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.282306 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc28b54-9e03-4356-9586-198cbebe01bc" containerName="dnsmasq-dns" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.282323 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" containerName="nova-metadata-metadata" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.282340 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="47678b2a-6ab4-4150-b1b8-091d4e500d2e" containerName="nova-manage" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 
11:49:17.283878 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.285744 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.286377 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.292565 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.294231 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.296106 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.302499 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.312004 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.380225 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-config-data\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.380302 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n87qn\" (UniqueName: \"kubernetes.io/projected/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-kube-api-access-n87qn\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.380349 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8hzz\" (UniqueName: \"kubernetes.io/projected/36828b26-ee22-4926-82ba-21d3c7be7f6d-kube-api-access-k8hzz\") pod \"nova-scheduler-0\" (UID: \"36828b26-ee22-4926-82ba-21d3c7be7f6d\") " pod="openstack/nova-scheduler-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.380412 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36828b26-ee22-4926-82ba-21d3c7be7f6d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36828b26-ee22-4926-82ba-21d3c7be7f6d\") " pod="openstack/nova-scheduler-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.380471 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.380495 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-logs\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " 
pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.380569 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.380627 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36828b26-ee22-4926-82ba-21d3c7be7f6d-config-data\") pod \"nova-scheduler-0\" (UID: \"36828b26-ee22-4926-82ba-21d3c7be7f6d\") " pod="openstack/nova-scheduler-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.482010 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.482068 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-logs\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.482110 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.482163 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36828b26-ee22-4926-82ba-21d3c7be7f6d-config-data\") pod \"nova-scheduler-0\" (UID: \"36828b26-ee22-4926-82ba-21d3c7be7f6d\") " pod="openstack/nova-scheduler-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.482227 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-config-data\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.482279 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n87qn\" (UniqueName: \"kubernetes.io/projected/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-kube-api-access-n87qn\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.482318 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8hzz\" (UniqueName: \"kubernetes.io/projected/36828b26-ee22-4926-82ba-21d3c7be7f6d-kube-api-access-k8hzz\") pod \"nova-scheduler-0\" (UID: \"36828b26-ee22-4926-82ba-21d3c7be7f6d\") " pod="openstack/nova-scheduler-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.482365 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/36828b26-ee22-4926-82ba-21d3c7be7f6d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36828b26-ee22-4926-82ba-21d3c7be7f6d\") " pod="openstack/nova-scheduler-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.483883 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-logs\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.488683 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.490410 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36828b26-ee22-4926-82ba-21d3c7be7f6d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36828b26-ee22-4926-82ba-21d3c7be7f6d\") " pod="openstack/nova-scheduler-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.491482 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.492256 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-config-data\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.493782 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36828b26-ee22-4926-82ba-21d3c7be7f6d-config-data\") pod \"nova-scheduler-0\" (UID: \"36828b26-ee22-4926-82ba-21d3c7be7f6d\") " pod="openstack/nova-scheduler-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.500162 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n87qn\" (UniqueName: \"kubernetes.io/projected/8ea985e9-30fb-4e8e-8fd9-29c156245bfd-kube-api-access-n87qn\") pod \"nova-metadata-0\" (UID: \"8ea985e9-30fb-4e8e-8fd9-29c156245bfd\") " pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.502567 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8hzz\" (UniqueName: \"kubernetes.io/projected/36828b26-ee22-4926-82ba-21d3c7be7f6d-kube-api-access-k8hzz\") pod \"nova-scheduler-0\" (UID: \"36828b26-ee22-4926-82ba-21d3c7be7f6d\") " pod="openstack/nova-scheduler-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.564128 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f225cf5-e1d8-4c10-893b-a497f9959caf" path="/var/lib/kubelet/pods/3f225cf5-e1d8-4c10-893b-a497f9959caf/volumes" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.565224 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8a68003-f2e4-4e86-9b51-9d6368701cf7" path="/var/lib/kubelet/pods/d8a68003-f2e4-4e86-9b51-9d6368701cf7/volumes" Feb 
04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.609124 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 04 11:49:17 crc kubenswrapper[4728]: I0204 11:49:17.623613 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 04 11:49:18 crc kubenswrapper[4728]: I0204 11:49:18.874714 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 04 11:49:18 crc kubenswrapper[4728]: W0204 11:49:18.887947 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ea985e9_30fb_4e8e_8fd9_29c156245bfd.slice/crio-516f75b7d74defdcb7a479b4d3e06b6fd2c74224f7ccee76fab6b19e2e77539b WatchSource:0}: Error finding container 516f75b7d74defdcb7a479b4d3e06b6fd2c74224f7ccee76fab6b19e2e77539b: Status 404 returned error can't find the container with id 516f75b7d74defdcb7a479b4d3e06b6fd2c74224f7ccee76fab6b19e2e77539b Feb 04 11:49:18 crc kubenswrapper[4728]: I0204 11:49:18.890230 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 04 11:49:18 crc kubenswrapper[4728]: W0204 11:49:18.892906 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36828b26_ee22_4926_82ba_21d3c7be7f6d.slice/crio-7c9488fb7e0cfb8d9cbcf13d856e5ec2ae7427cbd8174ec3b42a23534d5d8cce WatchSource:0}: Error finding container 7c9488fb7e0cfb8d9cbcf13d856e5ec2ae7427cbd8174ec3b42a23534d5d8cce: Status 404 returned error can't find the container with id 7c9488fb7e0cfb8d9cbcf13d856e5ec2ae7427cbd8174ec3b42a23534d5d8cce Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.160124 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36828b26-ee22-4926-82ba-21d3c7be7f6d","Type":"ContainerStarted","Data":"7c9488fb7e0cfb8d9cbcf13d856e5ec2ae7427cbd8174ec3b42a23534d5d8cce"} Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.162675 4728 generic.go:334] "Generic (PLEG): container finished" podID="df9ea125-5b52-4638-830a-642e1edeacda" containerID="a908af24ecb02b4fb391deca169fde709052ef74f63d4438c74a6d9699af300d" exitCode=0 Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.162769 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df9ea125-5b52-4638-830a-642e1edeacda","Type":"ContainerDied","Data":"a908af24ecb02b4fb391deca169fde709052ef74f63d4438c74a6d9699af300d"} Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.162815 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df9ea125-5b52-4638-830a-642e1edeacda","Type":"ContainerDied","Data":"da5d8223bee83f47ed2341fdad6747a562ead4d93ea3e3986aa3ca3dd5136a7e"} Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.162827 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da5d8223bee83f47ed2341fdad6747a562ead4d93ea3e3986aa3ca3dd5136a7e" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.164724 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea985e9-30fb-4e8e-8fd9-29c156245bfd","Type":"ContainerStarted","Data":"516f75b7d74defdcb7a479b4d3e06b6fd2c74224f7ccee76fab6b19e2e77539b"} Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.177176 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.320563 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-internal-tls-certs\") pod \"df9ea125-5b52-4638-830a-642e1edeacda\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.320618 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df9ea125-5b52-4638-830a-642e1edeacda-logs\") pod \"df9ea125-5b52-4638-830a-642e1edeacda\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.320650 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-config-data\") pod \"df9ea125-5b52-4638-830a-642e1edeacda\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.320692 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-combined-ca-bundle\") pod \"df9ea125-5b52-4638-830a-642e1edeacda\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.320809 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82pm7\" (UniqueName: \"kubernetes.io/projected/df9ea125-5b52-4638-830a-642e1edeacda-kube-api-access-82pm7\") pod \"df9ea125-5b52-4638-830a-642e1edeacda\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.320832 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-public-tls-certs\") pod \"df9ea125-5b52-4638-830a-642e1edeacda\" (UID: \"df9ea125-5b52-4638-830a-642e1edeacda\") " Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.321110 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df9ea125-5b52-4638-830a-642e1edeacda-logs" (OuterVolumeSpecName: "logs") pod "df9ea125-5b52-4638-830a-642e1edeacda" (UID: "df9ea125-5b52-4638-830a-642e1edeacda"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.324844 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9ea125-5b52-4638-830a-642e1edeacda-kube-api-access-82pm7" (OuterVolumeSpecName: "kube-api-access-82pm7") pod "df9ea125-5b52-4638-830a-642e1edeacda" (UID: "df9ea125-5b52-4638-830a-642e1edeacda"). InnerVolumeSpecName "kube-api-access-82pm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.345669 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df9ea125-5b52-4638-830a-642e1edeacda" (UID: "df9ea125-5b52-4638-830a-642e1edeacda"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.348617 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-config-data" (OuterVolumeSpecName: "config-data") pod "df9ea125-5b52-4638-830a-642e1edeacda" (UID: "df9ea125-5b52-4638-830a-642e1edeacda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.376364 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "df9ea125-5b52-4638-830a-642e1edeacda" (UID: "df9ea125-5b52-4638-830a-642e1edeacda"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.380608 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "df9ea125-5b52-4638-830a-642e1edeacda" (UID: "df9ea125-5b52-4638-830a-642e1edeacda"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.422931 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df9ea125-5b52-4638-830a-642e1edeacda-logs\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.422978 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.422998 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.423017 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82pm7\" (UniqueName: \"kubernetes.io/projected/df9ea125-5b52-4638-830a-642e1edeacda-kube-api-access-82pm7\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.423033 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:19 crc kubenswrapper[4728]: I0204 11:49:19.423045 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9ea125-5b52-4638-830a-642e1edeacda-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.174990 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36828b26-ee22-4926-82ba-21d3c7be7f6d","Type":"ContainerStarted","Data":"bf2b03abef4da987fff3a8cd8345cc768a842667cef6a16e5724f1aa0c6e49ef"} Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.177880 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.177926 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea985e9-30fb-4e8e-8fd9-29c156245bfd","Type":"ContainerStarted","Data":"8c550b9fc4d90368c15cfda87a9b0e1f529d9abb4480d1c0cbf5a5127b67df31"} Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.177963 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea985e9-30fb-4e8e-8fd9-29c156245bfd","Type":"ContainerStarted","Data":"826dc858de85d129493eca5ee568835d0dd58a0d6428dbdafb624247571cf98b"} Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.205846 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.205829799 podStartE2EDuration="3.205829799s" podCreationTimestamp="2026-02-04 11:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:49:20.195213897 +0000 UTC m=+1309.337918302" watchObservedRunningTime="2026-02-04 11:49:20.205829799 +0000 UTC m=+1309.348534184" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.220580 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.228530 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.240311 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 04 11:49:20 crc kubenswrapper[4728]: E0204 11:49:20.240669 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9ea125-5b52-4638-830a-642e1edeacda" containerName="nova-api-api" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.240684 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9ea125-5b52-4638-830a-642e1edeacda" containerName="nova-api-api" Feb 04 11:49:20 crc kubenswrapper[4728]: E0204 11:49:20.240720 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9ea125-5b52-4638-830a-642e1edeacda" containerName="nova-api-log" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.240727 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9ea125-5b52-4638-830a-642e1edeacda" containerName="nova-api-log" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.240924 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9ea125-5b52-4638-830a-642e1edeacda" containerName="nova-api-api" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.240938 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9ea125-5b52-4638-830a-642e1edeacda" containerName="nova-api-log" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.241903 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.246600 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.246934 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.247246 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.252487 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.252467881 podStartE2EDuration="3.252467881s" podCreationTimestamp="2026-02-04 11:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:49:20.239982506 +0000 UTC m=+1309.382686891" watchObservedRunningTime="2026-02-04 11:49:20.252467881 +0000 UTC m=+1309.395172266" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.271576 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.344580 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvkb9\" (UniqueName: \"kubernetes.io/projected/63d40f77-97e6-4954-9b50-4d2c6032b5b8-kube-api-access-zvkb9\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.344671 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.344699 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-public-tls-certs\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.344799 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-config-data\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.344836 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63d40f77-97e6-4954-9b50-4d2c6032b5b8-logs\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.344946 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.446994 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-config-data\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.447048 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63d40f77-97e6-4954-9b50-4d2c6032b5b8-logs\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.447149 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.447197 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvkb9\" (UniqueName: \"kubernetes.io/projected/63d40f77-97e6-4954-9b50-4d2c6032b5b8-kube-api-access-zvkb9\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.447242 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.447266 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-public-tls-certs\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.448399 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63d40f77-97e6-4954-9b50-4d2c6032b5b8-logs\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.452743 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.452953 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-config-data\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.453292 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.461126 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63d40f77-97e6-4954-9b50-4d2c6032b5b8-public-tls-certs\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.468292 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvkb9\" (UniqueName: \"kubernetes.io/projected/63d40f77-97e6-4954-9b50-4d2c6032b5b8-kube-api-access-zvkb9\") pod \"nova-api-0\" (UID: \"63d40f77-97e6-4954-9b50-4d2c6032b5b8\") " pod="openstack/nova-api-0" Feb 04 11:49:20 crc kubenswrapper[4728]: I0204 11:49:20.563018 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 04 11:49:21 crc kubenswrapper[4728]: I0204 11:49:21.081466 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 04 11:49:21 crc kubenswrapper[4728]: W0204 11:49:21.102857 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63d40f77_97e6_4954_9b50_4d2c6032b5b8.slice/crio-180031d24e945a332f63caff267da2031588e37c279d1ec605eccc86003dc0b8 WatchSource:0}: Error finding container 180031d24e945a332f63caff267da2031588e37c279d1ec605eccc86003dc0b8: Status 404 returned error can't find the container with id 180031d24e945a332f63caff267da2031588e37c279d1ec605eccc86003dc0b8 Feb 04 11:49:21 crc kubenswrapper[4728]: I0204 11:49:21.189488 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63d40f77-97e6-4954-9b50-4d2c6032b5b8","Type":"ContainerStarted","Data":"180031d24e945a332f63caff267da2031588e37c279d1ec605eccc86003dc0b8"} Feb 04 11:49:21 crc kubenswrapper[4728]: I0204 11:49:21.569933 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df9ea125-5b52-4638-830a-642e1edeacda" path="/var/lib/kubelet/pods/df9ea125-5b52-4638-830a-642e1edeacda/volumes" Feb 04 11:49:22 crc kubenswrapper[4728]: I0204 11:49:22.206522 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63d40f77-97e6-4954-9b50-4d2c6032b5b8","Type":"ContainerStarted","Data":"7ff3024b63a012e459c743b9b7009abde61d1e167a513799e6e6d7d8acc88f57"} Feb 04 11:49:22 crc kubenswrapper[4728]: I0204 11:49:22.206584 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63d40f77-97e6-4954-9b50-4d2c6032b5b8","Type":"ContainerStarted","Data":"3790691e0c0385d4c65c6020fd4a5fc70bf99ce669dd2d9aeee6000c2247d2f0"} Feb 04 11:49:22 crc kubenswrapper[4728]: I0204 11:49:22.226311 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.226290974 podStartE2EDuration="2.226290974s" podCreationTimestamp="2026-02-04 11:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:49:22.226121031 +0000 UTC m=+1311.368825416" watchObservedRunningTime="2026-02-04 11:49:22.226290974 +0000 UTC m=+1311.368995359" Feb 04 11:49:22 crc kubenswrapper[4728]: I0204 11:49:22.609828 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 11:49:22 crc kubenswrapper[4728]: I0204 11:49:22.609883 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 04 11:49:22 crc kubenswrapper[4728]: I0204 11:49:22.623968 4728 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 04 11:49:27 crc kubenswrapper[4728]: I0204 11:49:27.609576 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 04 11:49:27 crc kubenswrapper[4728]: I0204 11:49:27.610110 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 04 11:49:27 crc kubenswrapper[4728]: I0204 11:49:27.624433 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 04 11:49:27 crc kubenswrapper[4728]: I0204 11:49:27.651585 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 04 11:49:28 crc kubenswrapper[4728]: I0204 11:49:28.297681 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 04 11:49:28 crc kubenswrapper[4728]: I0204 11:49:28.622920 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8ea985e9-30fb-4e8e-8fd9-29c156245bfd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 04 11:49:28 crc kubenswrapper[4728]: I0204 11:49:28.622920 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8ea985e9-30fb-4e8e-8fd9-29c156245bfd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 04 11:49:30 crc kubenswrapper[4728]: I0204 11:49:30.317822 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 04 11:49:30 crc kubenswrapper[4728]: I0204 11:49:30.563309 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 04 11:49:30 crc kubenswrapper[4728]: I0204 11:49:30.563384 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 04 11:49:31 crc kubenswrapper[4728]: I0204 11:49:31.611037 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="63d40f77-97e6-4954-9b50-4d2c6032b5b8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.217:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 04 11:49:31 crc kubenswrapper[4728]: I0204 11:49:31.611045 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="63d40f77-97e6-4954-9b50-4d2c6032b5b8" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.217:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 04 11:49:33 crc kubenswrapper[4728]: I0204 11:49:33.725468 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 04 11:49:33 crc kubenswrapper[4728]: I0204 11:49:33.726025 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f36f4b27-e48a-40a7-9179-9ad5146a1ce7" containerName="kube-state-metrics" containerID="cri-o://63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631" gracePeriod=30 Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.193582 4728 util.go:48] "No ready sandbox for pod can be found. 
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.193582 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.230808 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-586p4\" (UniqueName: \"kubernetes.io/projected/f36f4b27-e48a-40a7-9179-9ad5146a1ce7-kube-api-access-586p4\") pod \"f36f4b27-e48a-40a7-9179-9ad5146a1ce7\" (UID: \"f36f4b27-e48a-40a7-9179-9ad5146a1ce7\") "
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.262197 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f36f4b27-e48a-40a7-9179-9ad5146a1ce7-kube-api-access-586p4" (OuterVolumeSpecName: "kube-api-access-586p4") pod "f36f4b27-e48a-40a7-9179-9ad5146a1ce7" (UID: "f36f4b27-e48a-40a7-9179-9ad5146a1ce7"). InnerVolumeSpecName "kube-api-access-586p4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.324196 4728 generic.go:334] "Generic (PLEG): container finished" podID="f36f4b27-e48a-40a7-9179-9ad5146a1ce7" containerID="63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631" exitCode=2
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.324255 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f36f4b27-e48a-40a7-9179-9ad5146a1ce7","Type":"ContainerDied","Data":"63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631"}
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.324287 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f36f4b27-e48a-40a7-9179-9ad5146a1ce7","Type":"ContainerDied","Data":"27440b069b1eee629c292e8e2e4898de898aaff72f4e65dd94a08b9195dcdca8"}
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.324308 4728 scope.go:117] "RemoveContainer" containerID="63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631"
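
The PLEG "ContainerDied" events arrive from the runtime after the grace-period kill, and the kubelet then removes the dead container. As the next entries show, a racing second RemoveContainer for the same ID simply gets NotFound back from CRI-O and the "DeleteContainer returned error" line is benign: removal is effectively idempotent, so NotFound can be treated as success. A minimal sketch of that pattern over a gRPC error (the helper and its callback are hypothetical; only the status/codes handling is the standard gRPC API):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIgnoringNotFound invokes a removal function and treats gRPC
    // NotFound as "already removed", which is how the duplicate
    // RemoveContainer in this log resolves harmlessly.
    func removeIgnoringNotFound(remove func(id string) error, id string) error {
        if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
            return fmt.Errorf("remove container %s: %w", id, err)
        }
        return nil
    }

    func main() {
        // Simulate the racing second removal: the runtime reports NotFound.
        gone := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        fmt.Println(removeIgnoringNotFound(gone, "63acf97ea134")) // <nil>
    }
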
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.324491 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.332703 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-586p4\" (UniqueName: \"kubernetes.io/projected/f36f4b27-e48a-40a7-9179-9ad5146a1ce7-kube-api-access-586p4\") on node \"crc\" DevicePath \"\""
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.372318 4728 scope.go:117] "RemoveContainer" containerID="63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631"
Feb 04 11:49:34 crc kubenswrapper[4728]: E0204 11:49:34.374809 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631\": container with ID starting with 63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631 not found: ID does not exist" containerID="63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631"
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.374851 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631"} err="failed to get container status \"63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631\": rpc error: code = NotFound desc = could not find container \"63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631\": container with ID starting with 63acf97ea13436e6e892cf4b119f805b408db9cef07c78cbd9b8dd20bced6631 not found: ID does not exist"
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.398796 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.405906 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.430296 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 04 11:49:34 crc kubenswrapper[4728]: E0204 11:49:34.430639 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f36f4b27-e48a-40a7-9179-9ad5146a1ce7" containerName="kube-state-metrics"
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.430659 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f36f4b27-e48a-40a7-9179-9ad5146a1ce7" containerName="kube-state-metrics"
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.430870 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f36f4b27-e48a-40a7-9179-9ad5146a1ce7" containerName="kube-state-metrics"
Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.431562 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.433833 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9g4p\" (UniqueName: \"kubernetes.io/projected/e33c5244-2507-465e-8565-bfbc216f6382-kube-api-access-t9g4p\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.433874 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e33c5244-2507-465e-8565-bfbc216f6382-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.433923 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33c5244-2507-465e-8565-bfbc216f6382-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.433989 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c5244-2507-465e-8565-bfbc216f6382-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.437209 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.437236 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.452092 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.536170 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e33c5244-2507-465e-8565-bfbc216f6382-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.536289 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33c5244-2507-465e-8565-bfbc216f6382-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.536366 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c5244-2507-465e-8565-bfbc216f6382-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.536501 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9g4p\" 
(UniqueName: \"kubernetes.io/projected/e33c5244-2507-465e-8565-bfbc216f6382-kube-api-access-t9g4p\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.540955 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c5244-2507-465e-8565-bfbc216f6382-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.540964 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33c5244-2507-465e-8565-bfbc216f6382-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.551213 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e33c5244-2507-465e-8565-bfbc216f6382-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.569048 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9g4p\" (UniqueName: \"kubernetes.io/projected/e33c5244-2507-465e-8565-bfbc216f6382-kube-api-access-t9g4p\") pod \"kube-state-metrics-0\" (UID: \"e33c5244-2507-465e-8565-bfbc216f6382\") " pod="openstack/kube-state-metrics-0" Feb 04 11:49:34 crc kubenswrapper[4728]: I0204 11:49:34.754585 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 04 11:49:35 crc kubenswrapper[4728]: W0204 11:49:35.221093 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode33c5244_2507_465e_8565_bfbc216f6382.slice/crio-292a4f788ca1541f76091b4aa6a7ea62d2916fb65483dc53eb46ffa604bea633 WatchSource:0}: Error finding container 292a4f788ca1541f76091b4aa6a7ea62d2916fb65483dc53eb46ffa604bea633: Status 404 returned error can't find the container with id 292a4f788ca1541f76091b4aa6a7ea62d2916fb65483dc53eb46ffa604bea633 Feb 04 11:49:35 crc kubenswrapper[4728]: I0204 11:49:35.223274 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 11:49:35 crc kubenswrapper[4728]: I0204 11:49:35.228317 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 04 11:49:35 crc kubenswrapper[4728]: I0204 11:49:35.333341 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e33c5244-2507-465e-8565-bfbc216f6382","Type":"ContainerStarted","Data":"292a4f788ca1541f76091b4aa6a7ea62d2916fb65483dc53eb46ffa604bea633"} Feb 04 11:49:35 crc kubenswrapper[4728]: I0204 11:49:35.448450 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:49:35 crc kubenswrapper[4728]: I0204 11:49:35.448507 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 11:49:35 crc kubenswrapper[4728]: I0204 11:49:35.563769 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f36f4b27-e48a-40a7-9179-9ad5146a1ce7" path="/var/lib/kubelet/pods/f36f4b27-e48a-40a7-9179-9ad5146a1ce7/volumes" Feb 04 11:49:35 crc kubenswrapper[4728]: I0204 11:49:35.753262 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:49:35 crc kubenswrapper[4728]: I0204 11:49:35.753784 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="ceilometer-central-agent" containerID="cri-o://d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c" gracePeriod=30 Feb 04 11:49:35 crc kubenswrapper[4728]: I0204 11:49:35.753856 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="sg-core" containerID="cri-o://8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96" gracePeriod=30 Feb 04 11:49:35 crc kubenswrapper[4728]: I0204 11:49:35.753927 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="ceilometer-notification-agent" containerID="cri-o://b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db" gracePeriod=30 Feb 04 11:49:35 crc kubenswrapper[4728]: I0204 11:49:35.753993 4728 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="proxy-httpd" containerID="cri-o://5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc" gracePeriod=30 Feb 04 11:49:36 crc kubenswrapper[4728]: I0204 11:49:36.348715 4728 generic.go:334] "Generic (PLEG): container finished" podID="af36dea8-dd50-4419-9077-8832092343b5" containerID="5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc" exitCode=0 Feb 04 11:49:36 crc kubenswrapper[4728]: I0204 11:49:36.348800 4728 generic.go:334] "Generic (PLEG): container finished" podID="af36dea8-dd50-4419-9077-8832092343b5" containerID="8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96" exitCode=2 Feb 04 11:49:36 crc kubenswrapper[4728]: I0204 11:49:36.348802 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af36dea8-dd50-4419-9077-8832092343b5","Type":"ContainerDied","Data":"5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc"} Feb 04 11:49:36 crc kubenswrapper[4728]: I0204 11:49:36.348856 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af36dea8-dd50-4419-9077-8832092343b5","Type":"ContainerDied","Data":"8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96"} Feb 04 11:49:36 crc kubenswrapper[4728]: I0204 11:49:36.348870 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af36dea8-dd50-4419-9077-8832092343b5","Type":"ContainerDied","Data":"d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c"} Feb 04 11:49:36 crc kubenswrapper[4728]: I0204 11:49:36.348822 4728 generic.go:334] "Generic (PLEG): container finished" podID="af36dea8-dd50-4419-9077-8832092343b5" containerID="d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c" exitCode=0 Feb 04 11:49:36 crc kubenswrapper[4728]: I0204 11:49:36.351141 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e33c5244-2507-465e-8565-bfbc216f6382","Type":"ContainerStarted","Data":"b75c5d369fe6c35f57b198b91121eeb189eb7c2ba06dff774bc7adb87969a081"} Feb 04 11:49:36 crc kubenswrapper[4728]: I0204 11:49:36.351313 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 04 11:49:36 crc kubenswrapper[4728]: I0204 11:49:36.378387 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.953535381 podStartE2EDuration="2.378368965s" podCreationTimestamp="2026-02-04 11:49:34 +0000 UTC" firstStartedPulling="2026-02-04 11:49:35.223093363 +0000 UTC m=+1324.365797748" lastFinishedPulling="2026-02-04 11:49:35.647926947 +0000 UTC m=+1324.790631332" observedRunningTime="2026-02-04 11:49:36.371425981 +0000 UTC m=+1325.514130376" watchObservedRunningTime="2026-02-04 11:49:36.378368965 +0000 UTC m=+1325.521073360" Feb 04 11:49:37 crc kubenswrapper[4728]: I0204 11:49:37.638553 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 04 11:49:37 crc kubenswrapper[4728]: I0204 11:49:37.640397 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 04 11:49:37 crc kubenswrapper[4728]: I0204 11:49:37.643782 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 04 11:49:38 crc kubenswrapper[4728]: I0204 11:49:38.375763 4728 
Feb 04 11:49:38 crc kubenswrapper[4728]: I0204 11:49:38.375763 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.198870 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.233665 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-run-httpd\") pod \"af36dea8-dd50-4419-9077-8832092343b5\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") "
Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.233719 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jkdh\" (UniqueName: \"kubernetes.io/projected/af36dea8-dd50-4419-9077-8832092343b5-kube-api-access-6jkdh\") pod \"af36dea8-dd50-4419-9077-8832092343b5\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") "
Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.233802 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-log-httpd\") pod \"af36dea8-dd50-4419-9077-8832092343b5\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") "
Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.233924 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-scripts\") pod \"af36dea8-dd50-4419-9077-8832092343b5\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") "
Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.233958 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-config-data\") pod \"af36dea8-dd50-4419-9077-8832092343b5\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") "
Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.234039 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-sg-core-conf-yaml\") pod \"af36dea8-dd50-4419-9077-8832092343b5\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") "
Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.234127 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-combined-ca-bundle\") pod \"af36dea8-dd50-4419-9077-8832092343b5\" (UID: \"af36dea8-dd50-4419-9077-8832092343b5\") "
Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.234139 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "af36dea8-dd50-4419-9077-8832092343b5" (UID: "af36dea8-dd50-4419-9077-8832092343b5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.234648 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.234780 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "af36dea8-dd50-4419-9077-8832092343b5" (UID: "af36dea8-dd50-4419-9077-8832092343b5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.242092 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af36dea8-dd50-4419-9077-8832092343b5-kube-api-access-6jkdh" (OuterVolumeSpecName: "kube-api-access-6jkdh") pod "af36dea8-dd50-4419-9077-8832092343b5" (UID: "af36dea8-dd50-4419-9077-8832092343b5"). InnerVolumeSpecName "kube-api-access-6jkdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.242416 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-scripts" (OuterVolumeSpecName: "scripts") pod "af36dea8-dd50-4419-9077-8832092343b5" (UID: "af36dea8-dd50-4419-9077-8832092343b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.301515 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "af36dea8-dd50-4419-9077-8832092343b5" (UID: "af36dea8-dd50-4419-9077-8832092343b5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.324898 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af36dea8-dd50-4419-9077-8832092343b5" (UID: "af36dea8-dd50-4419-9077-8832092343b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.336172 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.336203 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.336214 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jkdh\" (UniqueName: \"kubernetes.io/projected/af36dea8-dd50-4419-9077-8832092343b5-kube-api-access-6jkdh\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.336225 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af36dea8-dd50-4419-9077-8832092343b5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.336233 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-scripts\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.371668 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-config-data" (OuterVolumeSpecName: "config-data") pod "af36dea8-dd50-4419-9077-8832092343b5" (UID: "af36dea8-dd50-4419-9077-8832092343b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.385985 4728 generic.go:334] "Generic (PLEG): container finished" podID="af36dea8-dd50-4419-9077-8832092343b5" containerID="b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db" exitCode=0 Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.387094 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.391919 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af36dea8-dd50-4419-9077-8832092343b5","Type":"ContainerDied","Data":"b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db"} Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.391976 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af36dea8-dd50-4419-9077-8832092343b5","Type":"ContainerDied","Data":"92494c5f5b0fde69c3c4d731c3bf6ec439d459291ac24ab2d8d25c9669921483"} Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.392000 4728 scope.go:117] "RemoveContainer" containerID="5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.425340 4728 scope.go:117] "RemoveContainer" containerID="8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.436958 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af36dea8-dd50-4419-9077-8832092343b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.447198 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.459688 4728 scope.go:117] "RemoveContainer" containerID="b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.460160 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.488840 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:49:39 crc kubenswrapper[4728]: E0204 11:49:39.489614 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="ceilometer-notification-agent" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.489640 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="ceilometer-notification-agent" Feb 04 11:49:39 crc kubenswrapper[4728]: E0204 11:49:39.489688 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="sg-core" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.489699 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="sg-core" Feb 04 11:49:39 crc kubenswrapper[4728]: E0204 11:49:39.489719 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="ceilometer-central-agent" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.489728 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="ceilometer-central-agent" Feb 04 11:49:39 crc kubenswrapper[4728]: E0204 11:49:39.489849 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="proxy-httpd" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.489864 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="proxy-httpd" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 
11:49:39.490177 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="sg-core" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.490207 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="ceilometer-central-agent" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.490249 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="proxy-httpd" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.490268 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="af36dea8-dd50-4419-9077-8832092343b5" containerName="ceilometer-notification-agent" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.493172 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.496332 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.496546 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.497009 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.507357 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.523034 4728 scope.go:117] "RemoveContainer" containerID="d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.538292 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dbbh\" (UniqueName: \"kubernetes.io/projected/a1c17abe-702e-43d8-99cc-4e0a1b932990-kube-api-access-9dbbh\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.538342 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.538364 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-config-data\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.538419 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-scripts\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.538444 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.538503 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1c17abe-702e-43d8-99cc-4e0a1b932990-run-httpd\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.538556 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1c17abe-702e-43d8-99cc-4e0a1b932990-log-httpd\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.538580 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.544410 4728 scope.go:117] "RemoveContainer" containerID="5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc" Feb 04 11:49:39 crc kubenswrapper[4728]: E0204 11:49:39.544889 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc\": container with ID starting with 5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc not found: ID does not exist" containerID="5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.544917 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc"} err="failed to get container status \"5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc\": rpc error: code = NotFound desc = could not find container \"5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc\": container with ID starting with 5b3845b69f61978b0744818b226ff7cde2fda32a3755f7754d0eba3943a8e4bc not found: ID does not exist" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.544938 4728 scope.go:117] "RemoveContainer" containerID="8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96" Feb 04 11:49:39 crc kubenswrapper[4728]: E0204 11:49:39.545979 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96\": container with ID starting with 8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96 not found: ID does not exist" containerID="8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.546026 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96"} err="failed to get container status \"8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96\": rpc error: code = NotFound desc = could not find container 
\"8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96\": container with ID starting with 8bcc69ec952ebf0d392a6b06fd3e5ca022310359bd3352f1242545d09086dc96 not found: ID does not exist" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.546041 4728 scope.go:117] "RemoveContainer" containerID="b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db" Feb 04 11:49:39 crc kubenswrapper[4728]: E0204 11:49:39.546507 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db\": container with ID starting with b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db not found: ID does not exist" containerID="b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.546567 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db"} err="failed to get container status \"b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db\": rpc error: code = NotFound desc = could not find container \"b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db\": container with ID starting with b3753b8b8a3285c095211eedf7569777e2e7c9410c7ddb80bbbb381570ff89db not found: ID does not exist" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.546603 4728 scope.go:117] "RemoveContainer" containerID="d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c" Feb 04 11:49:39 crc kubenswrapper[4728]: E0204 11:49:39.546904 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c\": container with ID starting with d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c not found: ID does not exist" containerID="d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.546949 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c"} err="failed to get container status \"d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c\": rpc error: code = NotFound desc = could not find container \"d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c\": container with ID starting with d9d0f043fb950140530f884950e53ceee1f4dcdcb6041f5c64eece57a86a2b5c not found: ID does not exist" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.570417 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af36dea8-dd50-4419-9077-8832092343b5" path="/var/lib/kubelet/pods/af36dea8-dd50-4419-9077-8832092343b5/volumes" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.640237 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.640296 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-config-data\") pod \"ceilometer-0\" 
(UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.640335 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-scripts\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.640357 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.640411 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1c17abe-702e-43d8-99cc-4e0a1b932990-run-httpd\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.640463 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1c17abe-702e-43d8-99cc-4e0a1b932990-log-httpd\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.640509 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.640561 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dbbh\" (UniqueName: \"kubernetes.io/projected/a1c17abe-702e-43d8-99cc-4e0a1b932990-kube-api-access-9dbbh\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.641336 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1c17abe-702e-43d8-99cc-4e0a1b932990-log-httpd\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.645411 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.645675 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1c17abe-702e-43d8-99cc-4e0a1b932990-run-httpd\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.645899 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-config-data\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") 
" pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.646994 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.647500 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.648398 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c17abe-702e-43d8-99cc-4e0a1b932990-scripts\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.656699 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dbbh\" (UniqueName: \"kubernetes.io/projected/a1c17abe-702e-43d8-99cc-4e0a1b932990-kube-api-access-9dbbh\") pod \"ceilometer-0\" (UID: \"a1c17abe-702e-43d8-99cc-4e0a1b932990\") " pod="openstack/ceilometer-0" Feb 04 11:49:39 crc kubenswrapper[4728]: I0204 11:49:39.823244 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 04 11:49:40 crc kubenswrapper[4728]: I0204 11:49:40.326547 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 04 11:49:40 crc kubenswrapper[4728]: I0204 11:49:40.397112 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1c17abe-702e-43d8-99cc-4e0a1b932990","Type":"ContainerStarted","Data":"b9ef59b00031eb61ca16b25c5510b3a76269006bef508a6c695db4d19e5d8c94"} Feb 04 11:49:40 crc kubenswrapper[4728]: I0204 11:49:40.572361 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 04 11:49:40 crc kubenswrapper[4728]: I0204 11:49:40.572788 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 04 11:49:40 crc kubenswrapper[4728]: I0204 11:49:40.575477 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 04 11:49:40 crc kubenswrapper[4728]: I0204 11:49:40.579504 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 04 11:49:41 crc kubenswrapper[4728]: I0204 11:49:41.410515 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1c17abe-702e-43d8-99cc-4e0a1b932990","Type":"ContainerStarted","Data":"5776fb98272aa54bce18de451bc5a6d3b0724b03d7d6c84764b7feab4beb6c3e"} Feb 04 11:49:41 crc kubenswrapper[4728]: I0204 11:49:41.410832 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 04 11:49:41 crc kubenswrapper[4728]: I0204 11:49:41.419795 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 04 11:49:42 crc kubenswrapper[4728]: I0204 11:49:42.422281 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a1c17abe-702e-43d8-99cc-4e0a1b932990","Type":"ContainerStarted","Data":"b7714633ac6a50bf66282f20e5cb0ddc29a8516cf1b8896a910224ee833fce01"} Feb 04 11:49:43 crc kubenswrapper[4728]: I0204 11:49:43.431709 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1c17abe-702e-43d8-99cc-4e0a1b932990","Type":"ContainerStarted","Data":"b693ee1e6f5c2ef39f9701c784d8348e24722d9ffb81ecd6fd4771c88de252c9"} Feb 04 11:49:44 crc kubenswrapper[4728]: I0204 11:49:44.772598 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 04 11:49:46 crc kubenswrapper[4728]: I0204 11:49:46.463174 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1c17abe-702e-43d8-99cc-4e0a1b932990","Type":"ContainerStarted","Data":"e2678c1074a8317c29ff913231ac114c9586c50bcf452b2f4a4819383dc561a1"} Feb 04 11:49:46 crc kubenswrapper[4728]: I0204 11:49:46.464664 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 04 11:50:05 crc kubenswrapper[4728]: I0204 11:50:05.448658 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:50:05 crc kubenswrapper[4728]: I0204 11:50:05.449819 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 11:50:05 crc kubenswrapper[4728]: I0204 11:50:05.449933 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:50:05 crc kubenswrapper[4728]: I0204 11:50:05.450488 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df14b9397f5cab1fc5b2e7a5ea922d0337cd8f0ecd7c5a6f65afe229e61d080f"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 11:50:05 crc kubenswrapper[4728]: I0204 11:50:05.450533 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://df14b9397f5cab1fc5b2e7a5ea922d0337cd8f0ecd7c5a6f65afe229e61d080f" gracePeriod=600 Feb 04 11:50:05 crc kubenswrapper[4728]: I0204 11:50:05.693040 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="df14b9397f5cab1fc5b2e7a5ea922d0337cd8f0ecd7c5a6f65afe229e61d080f" exitCode=0 Feb 04 11:50:05 crc kubenswrapper[4728]: I0204 11:50:05.693095 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"df14b9397f5cab1fc5b2e7a5ea922d0337cd8f0ecd7c5a6f65afe229e61d080f"} Feb 04 11:50:05 crc kubenswrapper[4728]: I0204 11:50:05.693132 4728 scope.go:117] 
"RemoveContainer" containerID="c9955d85c683603107a50c1f93858af2076e1b6307f2485d080e9953f839e1ba" Feb 04 11:50:06 crc kubenswrapper[4728]: I0204 11:50:06.703524 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994"} Feb 04 11:50:06 crc kubenswrapper[4728]: I0204 11:50:06.726852 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=22.011862002 podStartE2EDuration="27.726832149s" podCreationTimestamp="2026-02-04 11:49:39 +0000 UTC" firstStartedPulling="2026-02-04 11:49:40.331619655 +0000 UTC m=+1329.474324060" lastFinishedPulling="2026-02-04 11:49:46.046589822 +0000 UTC m=+1335.189294207" observedRunningTime="2026-02-04 11:49:46.486949923 +0000 UTC m=+1335.629654318" watchObservedRunningTime="2026-02-04 11:50:06.726832149 +0000 UTC m=+1355.869536534" Feb 04 11:50:09 crc kubenswrapper[4728]: I0204 11:50:09.889384 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 04 11:50:19 crc kubenswrapper[4728]: I0204 11:50:19.004364 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 11:50:19 crc kubenswrapper[4728]: I0204 11:50:19.998736 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 11:50:23 crc kubenswrapper[4728]: I0204 11:50:23.180352 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" containerName="rabbitmq" containerID="cri-o://8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6" gracePeriod=604796 Feb 04 11:50:24 crc kubenswrapper[4728]: I0204 11:50:24.074635 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="23b1eaab-360d-4438-b68d-0d61f21ff593" containerName="rabbitmq" containerID="cri-o://e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414" gracePeriod=604796 Feb 04 11:50:24 crc kubenswrapper[4728]: I0204 11:50:24.776157 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kvltd"] Feb 04 11:50:24 crc kubenswrapper[4728]: I0204 11:50:24.778504 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:24 crc kubenswrapper[4728]: I0204 11:50:24.793392 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kvltd"] Feb 04 11:50:24 crc kubenswrapper[4728]: I0204 11:50:24.903913 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-catalog-content\") pod \"redhat-operators-kvltd\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:24 crc kubenswrapper[4728]: I0204 11:50:24.903960 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-utilities\") pod \"redhat-operators-kvltd\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:24 crc kubenswrapper[4728]: I0204 11:50:24.904141 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pfsn\" (UniqueName: \"kubernetes.io/projected/2d51cc6e-76e2-4865-9679-4385711b8e0a-kube-api-access-2pfsn\") pod \"redhat-operators-kvltd\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:25 crc kubenswrapper[4728]: I0204 11:50:25.005947 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-catalog-content\") pod \"redhat-operators-kvltd\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:25 crc kubenswrapper[4728]: I0204 11:50:25.005994 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-utilities\") pod \"redhat-operators-kvltd\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:25 crc kubenswrapper[4728]: I0204 11:50:25.006032 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pfsn\" (UniqueName: \"kubernetes.io/projected/2d51cc6e-76e2-4865-9679-4385711b8e0a-kube-api-access-2pfsn\") pod \"redhat-operators-kvltd\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:25 crc kubenswrapper[4728]: I0204 11:50:25.006448 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-catalog-content\") pod \"redhat-operators-kvltd\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:25 crc kubenswrapper[4728]: I0204 11:50:25.006538 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-utilities\") pod \"redhat-operators-kvltd\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:25 crc kubenswrapper[4728]: I0204 11:50:25.025304 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2pfsn\" (UniqueName: \"kubernetes.io/projected/2d51cc6e-76e2-4865-9679-4385711b8e0a-kube-api-access-2pfsn\") pod \"redhat-operators-kvltd\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:25 crc kubenswrapper[4728]: I0204 11:50:25.108165 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:25 crc kubenswrapper[4728]: I0204 11:50:25.585850 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kvltd"] Feb 04 11:50:25 crc kubenswrapper[4728]: I0204 11:50:25.910248 4728 generic.go:334] "Generic (PLEG): container finished" podID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerID="c15ea578bcdde4847e18ef5a32a58584a8e98a6e59a8d40cb7200bc98220139a" exitCode=0 Feb 04 11:50:25 crc kubenswrapper[4728]: I0204 11:50:25.910300 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvltd" event={"ID":"2d51cc6e-76e2-4865-9679-4385711b8e0a","Type":"ContainerDied","Data":"c15ea578bcdde4847e18ef5a32a58584a8e98a6e59a8d40cb7200bc98220139a"} Feb 04 11:50:25 crc kubenswrapper[4728]: I0204 11:50:25.910330 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvltd" event={"ID":"2d51cc6e-76e2-4865-9679-4385711b8e0a","Type":"ContainerStarted","Data":"43903361515e49e963c5d5b254550773f079917b1b276e74f7735cd3dde8792e"} Feb 04 11:50:28 crc kubenswrapper[4728]: I0204 11:50:28.942913 4728 generic.go:334] "Generic (PLEG): container finished" podID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerID="11a6481ed752aec9590f109637d01270ae6ca89b1106e7f4c74109b6336e5ece" exitCode=0 Feb 04 11:50:28 crc kubenswrapper[4728]: I0204 11:50:28.942970 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvltd" event={"ID":"2d51cc6e-76e2-4865-9679-4385711b8e0a","Type":"ContainerDied","Data":"11a6481ed752aec9590f109637d01270ae6ca89b1106e7f4c74109b6336e5ece"} Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.807679 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.909986 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.910127 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-confd\") pod \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.910197 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-config-data\") pod \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.910230 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xj26\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-kube-api-access-8xj26\") pod \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.910254 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-plugins-conf\") pod \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.910311 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-erlang-cookie\") pod \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.910361 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-plugins\") pod \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.910391 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-pod-info\") pod \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.910433 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-server-conf\") pod \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.910454 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-erlang-cookie-secret\") pod \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\" (UID: 
\"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.910484 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-tls\") pod \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\" (UID: \"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c\") " Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.913031 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" (UID: "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.914703 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" (UID: "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.917970 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-pod-info" (OuterVolumeSpecName: "pod-info") pod "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" (UID: "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.921541 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-kube-api-access-8xj26" (OuterVolumeSpecName: "kube-api-access-8xj26") pod "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" (UID: "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c"). InnerVolumeSpecName "kube-api-access-8xj26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.925300 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" (UID: "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.927207 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" (UID: "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.942740 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" (UID: "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.953499 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" (UID: "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.963644 4728 generic.go:334] "Generic (PLEG): container finished" podID="b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" containerID="8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6" exitCode=0 Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.963697 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c","Type":"ContainerDied","Data":"8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6"} Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.963723 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0ffe05f-9876-4234-9d8f-cc886ffb9a6c","Type":"ContainerDied","Data":"ed8402716efb5f5e69156139a9189410ff61978a431618b31a5180568a99a271"} Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.963739 4728 scope.go:117] "RemoveContainer" containerID="8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.963871 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.968619 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvltd" event={"ID":"2d51cc6e-76e2-4865-9679-4385711b8e0a","Type":"ContainerStarted","Data":"19e9fdf44a296f2413ace72a5f69b981a1947a8506d6c27eef7c257d8da840c6"} Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.978712 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-config-data" (OuterVolumeSpecName: "config-data") pod "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" (UID: "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.986729 4728 scope.go:117] "RemoveContainer" containerID="e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9" Feb 04 11:50:29 crc kubenswrapper[4728]: I0204 11:50:29.995924 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kvltd" podStartSLOduration=2.539691506 podStartE2EDuration="5.995905905s" podCreationTimestamp="2026-02-04 11:50:24 +0000 UTC" firstStartedPulling="2026-02-04 11:50:25.912207362 +0000 UTC m=+1375.054911747" lastFinishedPulling="2026-02-04 11:50:29.368421721 +0000 UTC m=+1378.511126146" observedRunningTime="2026-02-04 11:50:29.993328554 +0000 UTC m=+1379.136032949" watchObservedRunningTime="2026-02-04 11:50:29.995905905 +0000 UTC m=+1379.138610290" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.000255 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-server-conf" (OuterVolumeSpecName: "server-conf") pod "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" (UID: "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.017049 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.017099 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xj26\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-kube-api-access-8xj26\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.017117 4728 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.017130 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.017141 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.017150 4728 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-pod-info\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.017157 4728 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-server-conf\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.017166 4728 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.017174 4728 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.017208 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.034126 4728 scope.go:117] "RemoveContainer" containerID="8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6" Feb 04 11:50:30 crc kubenswrapper[4728]: E0204 11:50:30.036321 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6\": container with ID starting with 8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6 not found: ID does not exist" containerID="8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.036357 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6"} err="failed to get container status \"8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6\": rpc error: code = NotFound desc = could not find container \"8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6\": container with ID starting with 8261ddb7f582d50867e57d99a05501b4fd436f9ee50b64535f3ca03c070599a6 not found: ID does not exist" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.036377 4728 scope.go:117] "RemoveContainer" containerID="e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9" Feb 04 11:50:30 crc kubenswrapper[4728]: E0204 11:50:30.036738 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9\": container with ID starting with e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9 not found: ID does not exist" containerID="e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.036848 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9"} err="failed to get container status \"e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9\": rpc error: code = NotFound desc = could not find container \"e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9\": container with ID starting with e63e7d2b19ac37a200806dcc8b7576384f23c8d84e5d829b699f4b9c171a79b9 not found: ID does not exist" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.044764 4728 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.068614 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" (UID: "b0ffe05f-9876-4234-9d8f-cc886ffb9a6c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.118711 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.118744 4728 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.304787 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.314213 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.336626 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 11:50:30 crc kubenswrapper[4728]: E0204 11:50:30.337437 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" containerName="setup-container" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.337460 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" containerName="setup-container" Feb 04 11:50:30 crc kubenswrapper[4728]: E0204 11:50:30.337476 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" containerName="rabbitmq" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.337482 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" containerName="rabbitmq" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.337662 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" containerName="rabbitmq" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.338597 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.349271 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.350044 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wgqx5" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.350254 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.350398 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.350548 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.350656 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.350908 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.352606 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.425014 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f73f795-7173-4835-b233-b78a4bd41854-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.425070 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.425105 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f73f795-7173-4835-b233-b78a4bd41854-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.425326 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng5b9\" (UniqueName: \"kubernetes.io/projected/2f73f795-7173-4835-b233-b78a4bd41854-kube-api-access-ng5b9\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.425368 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f73f795-7173-4835-b233-b78a4bd41854-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.425519 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.425557 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f73f795-7173-4835-b233-b78a4bd41854-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.425608 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.425635 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.425656 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f73f795-7173-4835-b233-b78a4bd41854-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.425688 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.512547 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="23b1eaab-360d-4438-b68d-0d61f21ff593" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.529845 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng5b9\" (UniqueName: \"kubernetes.io/projected/2f73f795-7173-4835-b233-b78a4bd41854-kube-api-access-ng5b9\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.529893 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f73f795-7173-4835-b233-b78a4bd41854-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.529972 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 
04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.529999 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f73f795-7173-4835-b233-b78a4bd41854-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.530042 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.530062 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.530081 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f73f795-7173-4835-b233-b78a4bd41854-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.530223 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.530420 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f73f795-7173-4835-b233-b78a4bd41854-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.530453 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.530491 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f73f795-7173-4835-b233-b78a4bd41854-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.530860 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.530877 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.531304 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f73f795-7173-4835-b233-b78a4bd41854-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.531588 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f73f795-7173-4835-b233-b78a4bd41854-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.531644 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.531932 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f73f795-7173-4835-b233-b78a4bd41854-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.534020 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f73f795-7173-4835-b233-b78a4bd41854-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.534521 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f73f795-7173-4835-b233-b78a4bd41854-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.535102 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.536287 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f73f795-7173-4835-b233-b78a4bd41854-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.554926 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng5b9\" (UniqueName: \"kubernetes.io/projected/2f73f795-7173-4835-b233-b78a4bd41854-kube-api-access-ng5b9\") pod \"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.605443 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"rabbitmq-server-0\" (UID: \"2f73f795-7173-4835-b233-b78a4bd41854\") " pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.699003 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.855528 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.936681 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-confd\") pod \"23b1eaab-360d-4438-b68d-0d61f21ff593\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.936850 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-tls\") pod \"23b1eaab-360d-4438-b68d-0d61f21ff593\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.936891 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-erlang-cookie\") pod \"23b1eaab-360d-4438-b68d-0d61f21ff593\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.936953 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"23b1eaab-360d-4438-b68d-0d61f21ff593\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.937008 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-server-conf\") pod \"23b1eaab-360d-4438-b68d-0d61f21ff593\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.937063 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hfmt\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-kube-api-access-4hfmt\") pod \"23b1eaab-360d-4438-b68d-0d61f21ff593\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.937102 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-plugins-conf\") pod \"23b1eaab-360d-4438-b68d-0d61f21ff593\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.937168 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23b1eaab-360d-4438-b68d-0d61f21ff593-erlang-cookie-secret\") pod \"23b1eaab-360d-4438-b68d-0d61f21ff593\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.937212 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-plugins\") 
pod \"23b1eaab-360d-4438-b68d-0d61f21ff593\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.937250 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23b1eaab-360d-4438-b68d-0d61f21ff593-pod-info\") pod \"23b1eaab-360d-4438-b68d-0d61f21ff593\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.937309 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-config-data\") pod \"23b1eaab-360d-4438-b68d-0d61f21ff593\" (UID: \"23b1eaab-360d-4438-b68d-0d61f21ff593\") " Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.937486 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "23b1eaab-360d-4438-b68d-0d61f21ff593" (UID: "23b1eaab-360d-4438-b68d-0d61f21ff593"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.937924 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.939441 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "23b1eaab-360d-4438-b68d-0d61f21ff593" (UID: "23b1eaab-360d-4438-b68d-0d61f21ff593"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.939507 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "23b1eaab-360d-4438-b68d-0d61f21ff593" (UID: "23b1eaab-360d-4438-b68d-0d61f21ff593"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.942431 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "23b1eaab-360d-4438-b68d-0d61f21ff593" (UID: "23b1eaab-360d-4438-b68d-0d61f21ff593"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.942859 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b1eaab-360d-4438-b68d-0d61f21ff593-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "23b1eaab-360d-4438-b68d-0d61f21ff593" (UID: "23b1eaab-360d-4438-b68d-0d61f21ff593"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.946032 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/23b1eaab-360d-4438-b68d-0d61f21ff593-pod-info" (OuterVolumeSpecName: "pod-info") pod "23b1eaab-360d-4438-b68d-0d61f21ff593" (UID: "23b1eaab-360d-4438-b68d-0d61f21ff593"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.950603 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-kube-api-access-4hfmt" (OuterVolumeSpecName: "kube-api-access-4hfmt") pod "23b1eaab-360d-4438-b68d-0d61f21ff593" (UID: "23b1eaab-360d-4438-b68d-0d61f21ff593"). InnerVolumeSpecName "kube-api-access-4hfmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.953303 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "23b1eaab-360d-4438-b68d-0d61f21ff593" (UID: "23b1eaab-360d-4438-b68d-0d61f21ff593"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.968113 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-config-data" (OuterVolumeSpecName: "config-data") pod "23b1eaab-360d-4438-b68d-0d61f21ff593" (UID: "23b1eaab-360d-4438-b68d-0d61f21ff593"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.979238 4728 generic.go:334] "Generic (PLEG): container finished" podID="23b1eaab-360d-4438-b68d-0d61f21ff593" containerID="e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414" exitCode=0 Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.979355 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.980881 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23b1eaab-360d-4438-b68d-0d61f21ff593","Type":"ContainerDied","Data":"e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414"} Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.980920 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"23b1eaab-360d-4438-b68d-0d61f21ff593","Type":"ContainerDied","Data":"2e5b77a4dbe2535e67cfab0a20d67cc9d3079e56b0f1b6921f667985aefb2f7d"} Feb 04 11:50:30 crc kubenswrapper[4728]: I0204 11:50:30.980962 4728 scope.go:117] "RemoveContainer" containerID="e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.016830 4728 scope.go:117] "RemoveContainer" containerID="ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.044181 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hfmt\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-kube-api-access-4hfmt\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.044215 4728 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.044230 4728 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/23b1eaab-360d-4438-b68d-0d61f21ff593-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.044243 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.044255 4728 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/23b1eaab-360d-4438-b68d-0d61f21ff593-pod-info\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.044266 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.044277 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.044301 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.045147 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-server-conf" (OuterVolumeSpecName: "server-conf") pod "23b1eaab-360d-4438-b68d-0d61f21ff593" (UID: "23b1eaab-360d-4438-b68d-0d61f21ff593"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.049671 4728 scope.go:117] "RemoveContainer" containerID="e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414" Feb 04 11:50:31 crc kubenswrapper[4728]: E0204 11:50:31.054422 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414\": container with ID starting with e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414 not found: ID does not exist" containerID="e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.054502 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414"} err="failed to get container status \"e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414\": rpc error: code = NotFound desc = could not find container \"e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414\": container with ID starting with e44f43366719121445f9dc966036ab0e2a79ec0ee3174a9412de2c51b31f7414 not found: ID does not exist" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.054527 4728 scope.go:117] "RemoveContainer" containerID="ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895" Feb 04 11:50:31 crc kubenswrapper[4728]: E0204 11:50:31.056132 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895\": container with ID starting with ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895 not found: ID does not exist" containerID="ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.056174 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895"} err="failed to get container status \"ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895\": rpc error: code = NotFound desc = could not find container \"ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895\": container with ID starting with ac8583e35a6b994fa8c97be236facaa2f2b84843ca81b61322364d028eb44895 not found: ID does not exist" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.085117 4728 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.102697 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "23b1eaab-360d-4438-b68d-0d61f21ff593" (UID: "23b1eaab-360d-4438-b68d-0d61f21ff593"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.153921 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/23b1eaab-360d-4438-b68d-0d61f21ff593-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.153953 4728 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.153963 4728 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/23b1eaab-360d-4438-b68d-0d61f21ff593-server-conf\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.178003 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 04 11:50:31 crc kubenswrapper[4728]: W0204 11:50:31.181253 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f73f795_7173_4835_b233_b78a4bd41854.slice/crio-040bc26ea3695a8cc6d710b988e639ab859be72d1625f44431f27de25e677b09 WatchSource:0}: Error finding container 040bc26ea3695a8cc6d710b988e639ab859be72d1625f44431f27de25e677b09: Status 404 returned error can't find the container with id 040bc26ea3695a8cc6d710b988e639ab859be72d1625f44431f27de25e677b09 Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.321208 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.334571 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.346286 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 11:50:31 crc kubenswrapper[4728]: E0204 11:50:31.346657 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b1eaab-360d-4438-b68d-0d61f21ff593" containerName="setup-container" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.346674 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b1eaab-360d-4438-b68d-0d61f21ff593" containerName="setup-container" Feb 04 11:50:31 crc kubenswrapper[4728]: E0204 11:50:31.346689 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b1eaab-360d-4438-b68d-0d61f21ff593" containerName="rabbitmq" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.346695 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b1eaab-360d-4438-b68d-0d61f21ff593" containerName="rabbitmq" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.346889 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b1eaab-360d-4438-b68d-0d61f21ff593" containerName="rabbitmq" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.347835 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.350293 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.350335 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.350347 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.350689 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6xmcq" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.351150 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.351817 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.352136 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.367118 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.458886 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.458932 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.458976 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.459005 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.459032 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.459094 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.459128 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.459157 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.459176 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfl2k\" (UniqueName: \"kubernetes.io/projected/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-kube-api-access-qfl2k\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.459234 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.459278 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561299 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561366 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561392 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561419 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qfl2k\" (UniqueName: \"kubernetes.io/projected/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-kube-api-access-qfl2k\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561497 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561533 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561590 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561620 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561659 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561688 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561720 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.561986 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.562112 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.563324 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.578539 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.579741 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.580290 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.583235 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.584601 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.585435 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.592279 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.592418 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfl2k\" (UniqueName: \"kubernetes.io/projected/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-kube-api-access-qfl2k\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.594999 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b1eaab-360d-4438-b68d-0d61f21ff593" path="/var/lib/kubelet/pods/23b1eaab-360d-4438-b68d-0d61f21ff593/volumes" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.596313 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ffe05f-9876-4234-9d8f-cc886ffb9a6c" path="/var/lib/kubelet/pods/b0ffe05f-9876-4234-9d8f-cc886ffb9a6c/volumes" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.597375 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.600737 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 
11:50:31.600928 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.602738 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.604570 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.606622 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f8dc874e-ea4b-47a5-9f00-d1633fb509ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.635642 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8dc874e-ea4b-47a5-9f00-d1633fb509ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.673141 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6xmcq" Feb 04 11:50:31 crc kubenswrapper[4728]: I0204 11:50:31.681816 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:50:32 crc kubenswrapper[4728]: I0204 11:50:32.004447 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f73f795-7173-4835-b233-b78a4bd41854","Type":"ContainerStarted","Data":"040bc26ea3695a8cc6d710b988e639ab859be72d1625f44431f27de25e677b09"} Feb 04 11:50:32 crc kubenswrapper[4728]: I0204 11:50:32.148022 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 04 11:50:32 crc kubenswrapper[4728]: W0204 11:50:32.154996 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8dc874e_ea4b_47a5_9f00_d1633fb509ba.slice/crio-d8eb3e085b25b06c0a85b66524b4b5bd419441d61aeeb356a5324328380102f6 WatchSource:0}: Error finding container d8eb3e085b25b06c0a85b66524b4b5bd419441d61aeeb356a5324328380102f6: Status 404 returned error can't find the container with id d8eb3e085b25b06c0a85b66524b4b5bd419441d61aeeb356a5324328380102f6 Feb 04 11:50:33 crc kubenswrapper[4728]: I0204 11:50:33.016726 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f73f795-7173-4835-b233-b78a4bd41854","Type":"ContainerStarted","Data":"b44f72df9a51a4b269700f9eeb455d5a011ad3a34ad416db3d3d5614b4955fee"} Feb 04 11:50:33 crc kubenswrapper[4728]: I0204 11:50:33.018064 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f8dc874e-ea4b-47a5-9f00-d1633fb509ba","Type":"ContainerStarted","Data":"d8eb3e085b25b06c0a85b66524b4b5bd419441d61aeeb356a5324328380102f6"} Feb 04 11:50:34 crc kubenswrapper[4728]: I0204 11:50:34.028297 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f8dc874e-ea4b-47a5-9f00-d1633fb509ba","Type":"ContainerStarted","Data":"228c40ce2700b4e25608e8b449fbf03787a1e70779b773f9cf39c7c563ec91cb"} Feb 04 11:50:35 crc kubenswrapper[4728]: I0204 11:50:35.109269 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:35 crc kubenswrapper[4728]: I0204 11:50:35.109609 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:36 crc kubenswrapper[4728]: I0204 11:50:36.166778 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kvltd" podUID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerName="registry-server" probeResult="failure" output=< Feb 04 11:50:36 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 04 11:50:36 crc kubenswrapper[4728]: > Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.010106 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-td6nw"] Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.012345 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.015057 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.022273 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-td6nw"] Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.071992 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-config\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.072224 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.072335 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.072401 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.072479 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.072505 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649rj\" (UniqueName: \"kubernetes.io/projected/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-kube-api-access-649rj\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.072563 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.173876 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: 
\"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.173941 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-config\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.174021 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.174073 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.174095 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.174114 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.174135 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649rj\" (UniqueName: \"kubernetes.io/projected/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-kube-api-access-649rj\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.175032 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.175040 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.175318 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.175345 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.175353 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-config\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.175879 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.206284 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-649rj\" (UniqueName: \"kubernetes.io/projected/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-kube-api-access-649rj\") pod \"dnsmasq-dns-7d84b4d45c-td6nw\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.349500 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:37 crc kubenswrapper[4728]: I0204 11:50:37.857820 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-td6nw"] Feb 04 11:50:37 crc kubenswrapper[4728]: W0204 11:50:37.862364 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode74c22b2_61d2_4d72_9b09_a6fc55cf0719.slice/crio-721669364bee9e53aa1157fe13823ade925c471c9527d6bfd642278f4e9fe5a9 WatchSource:0}: Error finding container 721669364bee9e53aa1157fe13823ade925c471c9527d6bfd642278f4e9fe5a9: Status 404 returned error can't find the container with id 721669364bee9e53aa1157fe13823ade925c471c9527d6bfd642278f4e9fe5a9 Feb 04 11:50:38 crc kubenswrapper[4728]: I0204 11:50:38.065604 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" event={"ID":"e74c22b2-61d2-4d72-9b09-a6fc55cf0719","Type":"ContainerStarted","Data":"8826cb08fff982d2afe85dc4f9ce1dbdbdc5631fda94effb32ee637fe308feb8"} Feb 04 11:50:38 crc kubenswrapper[4728]: I0204 11:50:38.065643 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" event={"ID":"e74c22b2-61d2-4d72-9b09-a6fc55cf0719","Type":"ContainerStarted","Data":"721669364bee9e53aa1157fe13823ade925c471c9527d6bfd642278f4e9fe5a9"} Feb 04 11:50:39 crc kubenswrapper[4728]: I0204 11:50:39.075498 4728 generic.go:334] "Generic (PLEG): container finished" podID="e74c22b2-61d2-4d72-9b09-a6fc55cf0719" containerID="8826cb08fff982d2afe85dc4f9ce1dbdbdc5631fda94effb32ee637fe308feb8" exitCode=0 Feb 04 11:50:39 crc kubenswrapper[4728]: I0204 11:50:39.075573 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" 
event={"ID":"e74c22b2-61d2-4d72-9b09-a6fc55cf0719","Type":"ContainerDied","Data":"8826cb08fff982d2afe85dc4f9ce1dbdbdc5631fda94effb32ee637fe308feb8"} Feb 04 11:50:40 crc kubenswrapper[4728]: I0204 11:50:40.088193 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" event={"ID":"e74c22b2-61d2-4d72-9b09-a6fc55cf0719","Type":"ContainerStarted","Data":"319945817ceefe4b7291e0d9088a0c9230377762d67918464a922577362c6322"} Feb 04 11:50:40 crc kubenswrapper[4728]: I0204 11:50:40.089707 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:40 crc kubenswrapper[4728]: I0204 11:50:40.107520 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" podStartSLOduration=4.107504034 podStartE2EDuration="4.107504034s" podCreationTimestamp="2026-02-04 11:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:50:40.106837938 +0000 UTC m=+1389.249542333" watchObservedRunningTime="2026-02-04 11:50:40.107504034 +0000 UTC m=+1389.250208409" Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.675923 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6tghs"] Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.680393 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.686536 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6tghs"] Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.717192 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-utilities\") pod \"certified-operators-6tghs\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.717276 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-catalog-content\") pod \"certified-operators-6tghs\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.717301 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ghxc\" (UniqueName: \"kubernetes.io/projected/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-kube-api-access-4ghxc\") pod \"certified-operators-6tghs\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.821168 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-catalog-content\") pod \"certified-operators-6tghs\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.821213 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4ghxc\" (UniqueName: \"kubernetes.io/projected/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-kube-api-access-4ghxc\") pod \"certified-operators-6tghs\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.821350 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-utilities\") pod \"certified-operators-6tghs\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.822003 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-utilities\") pod \"certified-operators-6tghs\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.822001 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-catalog-content\") pod \"certified-operators-6tghs\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:43 crc kubenswrapper[4728]: I0204 11:50:43.842707 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ghxc\" (UniqueName: \"kubernetes.io/projected/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-kube-api-access-4ghxc\") pod \"certified-operators-6tghs\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:44 crc kubenswrapper[4728]: I0204 11:50:44.016661 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:44 crc kubenswrapper[4728]: I0204 11:50:44.494604 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6tghs"] Feb 04 11:50:44 crc kubenswrapper[4728]: W0204 11:50:44.498947 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0e7a1d3_e7c5_4490_aca5_c4864b8157b9.slice/crio-9d96e5ced04f68c626c7942f166e81e2310ab5b147db287846ff99998ca586dd WatchSource:0}: Error finding container 9d96e5ced04f68c626c7942f166e81e2310ab5b147db287846ff99998ca586dd: Status 404 returned error can't find the container with id 9d96e5ced04f68c626c7942f166e81e2310ab5b147db287846ff99998ca586dd Feb 04 11:50:45 crc kubenswrapper[4728]: I0204 11:50:45.145692 4728 generic.go:334] "Generic (PLEG): container finished" podID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" containerID="f125b08f9f0377707dad628c7027f547a9cb468450463229c22e1d657aabdf84" exitCode=0 Feb 04 11:50:45 crc kubenswrapper[4728]: I0204 11:50:45.145803 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tghs" event={"ID":"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9","Type":"ContainerDied","Data":"f125b08f9f0377707dad628c7027f547a9cb468450463229c22e1d657aabdf84"} Feb 04 11:50:45 crc kubenswrapper[4728]: I0204 11:50:45.145912 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tghs" event={"ID":"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9","Type":"ContainerStarted","Data":"9d96e5ced04f68c626c7942f166e81e2310ab5b147db287846ff99998ca586dd"} Feb 04 11:50:45 crc kubenswrapper[4728]: I0204 11:50:45.155694 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:45 crc kubenswrapper[4728]: I0204 11:50:45.206138 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:45 crc kubenswrapper[4728]: I0204 11:50:45.871738 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r767d"] Feb 04 11:50:45 crc kubenswrapper[4728]: I0204 11:50:45.873841 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:45 crc kubenswrapper[4728]: I0204 11:50:45.901347 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r767d"] Feb 04 11:50:45 crc kubenswrapper[4728]: I0204 11:50:45.965363 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-catalog-content\") pod \"community-operators-r767d\" (UID: \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:45 crc kubenswrapper[4728]: I0204 11:50:45.965531 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrwq\" (UniqueName: \"kubernetes.io/projected/b81466f6-c3d7-4d2b-9f44-74104176f8fb-kube-api-access-mxrwq\") pod \"community-operators-r767d\" (UID: \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:45 crc kubenswrapper[4728]: I0204 11:50:45.965869 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-utilities\") pod \"community-operators-r767d\" (UID: \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:46 crc kubenswrapper[4728]: I0204 11:50:46.067453 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-utilities\") pod \"community-operators-r767d\" (UID: \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:46 crc kubenswrapper[4728]: I0204 11:50:46.067577 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-catalog-content\") pod \"community-operators-r767d\" (UID: \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:46 crc kubenswrapper[4728]: I0204 11:50:46.067618 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxrwq\" (UniqueName: \"kubernetes.io/projected/b81466f6-c3d7-4d2b-9f44-74104176f8fb-kube-api-access-mxrwq\") pod \"community-operators-r767d\" (UID: \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:46 crc kubenswrapper[4728]: I0204 11:50:46.068402 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kvltd"] Feb 04 11:50:46 crc kubenswrapper[4728]: I0204 11:50:46.068402 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-utilities\") pod \"community-operators-r767d\" (UID: \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:46 crc kubenswrapper[4728]: I0204 11:50:46.068414 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-catalog-content\") pod \"community-operators-r767d\" (UID: 
\"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:46 crc kubenswrapper[4728]: I0204 11:50:46.090146 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxrwq\" (UniqueName: \"kubernetes.io/projected/b81466f6-c3d7-4d2b-9f44-74104176f8fb-kube-api-access-mxrwq\") pod \"community-operators-r767d\" (UID: \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:46 crc kubenswrapper[4728]: I0204 11:50:46.209260 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:46 crc kubenswrapper[4728]: I0204 11:50:46.726142 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r767d"] Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.171753 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tghs" event={"ID":"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9","Type":"ContainerStarted","Data":"7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663"} Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.174445 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r767d" event={"ID":"b81466f6-c3d7-4d2b-9f44-74104176f8fb","Type":"ContainerStarted","Data":"be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b"} Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.174863 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r767d" event={"ID":"b81466f6-c3d7-4d2b-9f44-74104176f8fb","Type":"ContainerStarted","Data":"7c72c2ce070c245073926b085555294449ac2edb619f67be45aa8c6ac6be4333"} Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.174630 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kvltd" podUID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerName="registry-server" containerID="cri-o://19e9fdf44a296f2413ace72a5f69b981a1947a8506d6c27eef7c257d8da840c6" gracePeriod=2 Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.351033 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.475103 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw"] Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.475504 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" podUID="1c65630e-c4d5-43d3-89c5-7e5a62951230" containerName="dnsmasq-dns" containerID="cri-o://00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d" gracePeriod=10 Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.679720 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-5zqtn"] Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.699679 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.743343 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-5zqtn"] Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.820465 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.820798 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.821339 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djrcg\" (UniqueName: \"kubernetes.io/projected/8a7c3943-584c-4f0f-ad10-4030ee23df91-kube-api-access-djrcg\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.821361 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.821506 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-config\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.821528 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.821595 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.922905 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-config\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.922954 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.922991 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.923080 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.923120 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.923180 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djrcg\" (UniqueName: \"kubernetes.io/projected/8a7c3943-584c-4f0f-ad10-4030ee23df91-kube-api-access-djrcg\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.923214 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.923726 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-config\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.924068 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.924328 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.925010 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.925173 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.925522 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8a7c3943-584c-4f0f-ad10-4030ee23df91-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:47 crc kubenswrapper[4728]: I0204 11:50:47.956008 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djrcg\" (UniqueName: \"kubernetes.io/projected/8a7c3943-584c-4f0f-ad10-4030ee23df91-kube-api-access-djrcg\") pod \"dnsmasq-dns-6f6df4f56c-5zqtn\" (UID: \"8a7c3943-584c-4f0f-ad10-4030ee23df91\") " pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.022505 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.034588 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.126123 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-svc\") pod \"1c65630e-c4d5-43d3-89c5-7e5a62951230\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.126266 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-sb\") pod \"1c65630e-c4d5-43d3-89c5-7e5a62951230\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.126323 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-nb\") pod \"1c65630e-c4d5-43d3-89c5-7e5a62951230\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.126466 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-swift-storage-0\") pod \"1c65630e-c4d5-43d3-89c5-7e5a62951230\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.126584 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5shr\" (UniqueName: \"kubernetes.io/projected/1c65630e-c4d5-43d3-89c5-7e5a62951230-kube-api-access-r5shr\") pod \"1c65630e-c4d5-43d3-89c5-7e5a62951230\" (UID: 
\"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.126626 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-config\") pod \"1c65630e-c4d5-43d3-89c5-7e5a62951230\" (UID: \"1c65630e-c4d5-43d3-89c5-7e5a62951230\") " Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.135143 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c65630e-c4d5-43d3-89c5-7e5a62951230-kube-api-access-r5shr" (OuterVolumeSpecName: "kube-api-access-r5shr") pod "1c65630e-c4d5-43d3-89c5-7e5a62951230" (UID: "1c65630e-c4d5-43d3-89c5-7e5a62951230"). InnerVolumeSpecName "kube-api-access-r5shr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.194545 4728 generic.go:334] "Generic (PLEG): container finished" podID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerID="19e9fdf44a296f2413ace72a5f69b981a1947a8506d6c27eef7c257d8da840c6" exitCode=0 Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.194665 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvltd" event={"ID":"2d51cc6e-76e2-4865-9679-4385711b8e0a","Type":"ContainerDied","Data":"19e9fdf44a296f2413ace72a5f69b981a1947a8506d6c27eef7c257d8da840c6"} Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.199615 4728 generic.go:334] "Generic (PLEG): container finished" podID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" containerID="be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b" exitCode=0 Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.199674 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r767d" event={"ID":"b81466f6-c3d7-4d2b-9f44-74104176f8fb","Type":"ContainerDied","Data":"be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b"} Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.217076 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c65630e-c4d5-43d3-89c5-7e5a62951230" (UID: "1c65630e-c4d5-43d3-89c5-7e5a62951230"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.217112 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" event={"ID":"1c65630e-c4d5-43d3-89c5-7e5a62951230","Type":"ContainerDied","Data":"00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d"} Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.217081 4728 generic.go:334] "Generic (PLEG): container finished" podID="1c65630e-c4d5-43d3-89c5-7e5a62951230" containerID="00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d" exitCode=0 Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.217155 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" event={"ID":"1c65630e-c4d5-43d3-89c5-7e5a62951230","Type":"ContainerDied","Data":"8a0c1c6410499fdcb20f6629bdf1942855b92c34f1e14a45cfc0961c789aa8b6"} Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.217174 4728 scope.go:117] "RemoveContainer" containerID="00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.217199 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.220817 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-config" (OuterVolumeSpecName: "config") pod "1c65630e-c4d5-43d3-89c5-7e5a62951230" (UID: "1c65630e-c4d5-43d3-89c5-7e5a62951230"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.227707 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c65630e-c4d5-43d3-89c5-7e5a62951230" (UID: "1c65630e-c4d5-43d3-89c5-7e5a62951230"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.231446 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.231488 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5shr\" (UniqueName: \"kubernetes.io/projected/1c65630e-c4d5-43d3-89c5-7e5a62951230-kube-api-access-r5shr\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.231502 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.231515 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.257326 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c65630e-c4d5-43d3-89c5-7e5a62951230" (UID: "1c65630e-c4d5-43d3-89c5-7e5a62951230"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.263450 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c65630e-c4d5-43d3-89c5-7e5a62951230" (UID: "1c65630e-c4d5-43d3-89c5-7e5a62951230"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.333655 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.333686 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c65630e-c4d5-43d3-89c5-7e5a62951230-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.345386 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.357296 4728 scope.go:117] "RemoveContainer" containerID="d44a16edb96af2c073cfa33aa3b0c24ecedc20a529125da10e5892ae9798d6b7" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.395713 4728 scope.go:117] "RemoveContainer" containerID="00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d" Feb 04 11:50:48 crc kubenswrapper[4728]: E0204 11:50:48.396149 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d\": container with ID starting with 00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d not found: ID does not exist" containerID="00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.396186 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d"} err="failed to get container status \"00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d\": rpc error: code = NotFound desc = could not find container \"00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d\": container with ID starting with 00afc9ccf4daa75902a463feab2e072f935b03ed935d3510c6ac1c2bcc8fc46d not found: ID does not exist" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.396213 4728 scope.go:117] "RemoveContainer" containerID="d44a16edb96af2c073cfa33aa3b0c24ecedc20a529125da10e5892ae9798d6b7" Feb 04 11:50:48 crc kubenswrapper[4728]: E0204 11:50:48.398276 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d44a16edb96af2c073cfa33aa3b0c24ecedc20a529125da10e5892ae9798d6b7\": container with ID starting with d44a16edb96af2c073cfa33aa3b0c24ecedc20a529125da10e5892ae9798d6b7 not found: ID does not exist" containerID="d44a16edb96af2c073cfa33aa3b0c24ecedc20a529125da10e5892ae9798d6b7" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.398327 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44a16edb96af2c073cfa33aa3b0c24ecedc20a529125da10e5892ae9798d6b7"} err="failed to get container status \"d44a16edb96af2c073cfa33aa3b0c24ecedc20a529125da10e5892ae9798d6b7\": rpc error: code = NotFound desc = could not find container \"d44a16edb96af2c073cfa33aa3b0c24ecedc20a529125da10e5892ae9798d6b7\": container with ID starting with d44a16edb96af2c073cfa33aa3b0c24ecedc20a529125da10e5892ae9798d6b7 not found: ID does not exist" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.434400 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-utilities\") pod \"2d51cc6e-76e2-4865-9679-4385711b8e0a\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.434544 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pfsn\" (UniqueName: \"kubernetes.io/projected/2d51cc6e-76e2-4865-9679-4385711b8e0a-kube-api-access-2pfsn\") pod \"2d51cc6e-76e2-4865-9679-4385711b8e0a\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.434608 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-catalog-content\") pod \"2d51cc6e-76e2-4865-9679-4385711b8e0a\" (UID: \"2d51cc6e-76e2-4865-9679-4385711b8e0a\") " Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.435281 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-utilities" (OuterVolumeSpecName: "utilities") pod "2d51cc6e-76e2-4865-9679-4385711b8e0a" (UID: "2d51cc6e-76e2-4865-9679-4385711b8e0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.439023 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d51cc6e-76e2-4865-9679-4385711b8e0a-kube-api-access-2pfsn" (OuterVolumeSpecName: "kube-api-access-2pfsn") pod "2d51cc6e-76e2-4865-9679-4385711b8e0a" (UID: "2d51cc6e-76e2-4865-9679-4385711b8e0a"). InnerVolumeSpecName "kube-api-access-2pfsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.497239 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-5zqtn"] Feb 04 11:50:48 crc kubenswrapper[4728]: W0204 11:50:48.500281 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a7c3943_584c_4f0f_ad10_4030ee23df91.slice/crio-b45c7642fb3c09ac70048eedf7b210c54dda8d919fc8755df832fc14a7ca942b WatchSource:0}: Error finding container b45c7642fb3c09ac70048eedf7b210c54dda8d919fc8755df832fc14a7ca942b: Status 404 returned error can't find the container with id b45c7642fb3c09ac70048eedf7b210c54dda8d919fc8755df832fc14a7ca942b Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.541519 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.546240 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pfsn\" (UniqueName: \"kubernetes.io/projected/2d51cc6e-76e2-4865-9679-4385711b8e0a-kube-api-access-2pfsn\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.546302 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d51cc6e-76e2-4865-9679-4385711b8e0a" (UID: "2d51cc6e-76e2-4865-9679-4385711b8e0a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.557858 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw"] Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.566430 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-qzfrw"] Feb 04 11:50:48 crc kubenswrapper[4728]: I0204 11:50:48.648049 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d51cc6e-76e2-4865-9679-4385711b8e0a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.227245 4728 generic.go:334] "Generic (PLEG): container finished" podID="8a7c3943-584c-4f0f-ad10-4030ee23df91" containerID="80e810749ba6ee69f2fc50a9c100b0539eaa41a07e61e92618ef025959ba7c38" exitCode=0 Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.227353 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" event={"ID":"8a7c3943-584c-4f0f-ad10-4030ee23df91","Type":"ContainerDied","Data":"80e810749ba6ee69f2fc50a9c100b0539eaa41a07e61e92618ef025959ba7c38"} Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.227387 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" event={"ID":"8a7c3943-584c-4f0f-ad10-4030ee23df91","Type":"ContainerStarted","Data":"b45c7642fb3c09ac70048eedf7b210c54dda8d919fc8755df832fc14a7ca942b"} Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.229509 4728 generic.go:334] "Generic (PLEG): container finished" podID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" containerID="7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663" exitCode=0 Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.229573 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tghs" event={"ID":"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9","Type":"ContainerDied","Data":"7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663"} Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.236053 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kvltd" event={"ID":"2d51cc6e-76e2-4865-9679-4385711b8e0a","Type":"ContainerDied","Data":"43903361515e49e963c5d5b254550773f079917b1b276e74f7735cd3dde8792e"} Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.236347 4728 scope.go:117] "RemoveContainer" containerID="19e9fdf44a296f2413ace72a5f69b981a1947a8506d6c27eef7c257d8da840c6" Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.236452 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kvltd" Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.246038 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r767d" event={"ID":"b81466f6-c3d7-4d2b-9f44-74104176f8fb","Type":"ContainerStarted","Data":"dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4"} Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.484974 4728 scope.go:117] "RemoveContainer" containerID="11a6481ed752aec9590f109637d01270ae6ca89b1106e7f4c74109b6336e5ece" Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.513921 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kvltd"] Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.527644 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kvltd"] Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.529152 4728 scope.go:117] "RemoveContainer" containerID="c15ea578bcdde4847e18ef5a32a58584a8e98a6e59a8d40cb7200bc98220139a" Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.567920 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c65630e-c4d5-43d3-89c5-7e5a62951230" path="/var/lib/kubelet/pods/1c65630e-c4d5-43d3-89c5-7e5a62951230/volumes" Feb 04 11:50:49 crc kubenswrapper[4728]: I0204 11:50:49.568709 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d51cc6e-76e2-4865-9679-4385711b8e0a" path="/var/lib/kubelet/pods/2d51cc6e-76e2-4865-9679-4385711b8e0a/volumes" Feb 04 11:50:50 crc kubenswrapper[4728]: I0204 11:50:50.260679 4728 generic.go:334] "Generic (PLEG): container finished" podID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" containerID="dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4" exitCode=0 Feb 04 11:50:50 crc kubenswrapper[4728]: I0204 11:50:50.260725 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r767d" event={"ID":"b81466f6-c3d7-4d2b-9f44-74104176f8fb","Type":"ContainerDied","Data":"dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4"} Feb 04 11:50:50 crc kubenswrapper[4728]: I0204 11:50:50.264190 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" event={"ID":"8a7c3943-584c-4f0f-ad10-4030ee23df91","Type":"ContainerStarted","Data":"20aa652e4bf8f00adadffda1d9ca6973e6949a8d23b118a44994bd13d70ebb8c"} Feb 04 11:50:50 crc kubenswrapper[4728]: I0204 11:50:50.264314 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:50 crc kubenswrapper[4728]: I0204 11:50:50.266951 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tghs" event={"ID":"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9","Type":"ContainerStarted","Data":"ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3"} Feb 04 11:50:50 crc kubenswrapper[4728]: I0204 11:50:50.301356 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6tghs" podStartSLOduration=2.763211252 podStartE2EDuration="7.30133538s" podCreationTimestamp="2026-02-04 11:50:43 +0000 UTC" firstStartedPulling="2026-02-04 11:50:45.147906206 +0000 UTC m=+1394.290610591" lastFinishedPulling="2026-02-04 11:50:49.686030324 +0000 UTC m=+1398.828734719" observedRunningTime="2026-02-04 11:50:50.296070796 +0000 UTC m=+1399.438775211" 
watchObservedRunningTime="2026-02-04 11:50:50.30133538 +0000 UTC m=+1399.444039765" Feb 04 11:50:50 crc kubenswrapper[4728]: I0204 11:50:50.318735 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" podStartSLOduration=3.318714772 podStartE2EDuration="3.318714772s" podCreationTimestamp="2026-02-04 11:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:50:50.315451824 +0000 UTC m=+1399.458156219" watchObservedRunningTime="2026-02-04 11:50:50.318714772 +0000 UTC m=+1399.461419177" Feb 04 11:50:51 crc kubenswrapper[4728]: I0204 11:50:51.279466 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r767d" event={"ID":"b81466f6-c3d7-4d2b-9f44-74104176f8fb","Type":"ContainerStarted","Data":"8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90"} Feb 04 11:50:51 crc kubenswrapper[4728]: I0204 11:50:51.302251 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r767d" podStartSLOduration=3.8769702969999997 podStartE2EDuration="6.302228713s" podCreationTimestamp="2026-02-04 11:50:45 +0000 UTC" firstStartedPulling="2026-02-04 11:50:48.209277122 +0000 UTC m=+1397.351981507" lastFinishedPulling="2026-02-04 11:50:50.634535538 +0000 UTC m=+1399.777239923" observedRunningTime="2026-02-04 11:50:51.296938687 +0000 UTC m=+1400.439643092" watchObservedRunningTime="2026-02-04 11:50:51.302228713 +0000 UTC m=+1400.444933098" Feb 04 11:50:54 crc kubenswrapper[4728]: I0204 11:50:54.016828 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:54 crc kubenswrapper[4728]: I0204 11:50:54.017160 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:54 crc kubenswrapper[4728]: I0204 11:50:54.075664 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:54 crc kubenswrapper[4728]: I0204 11:50:54.359853 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:55 crc kubenswrapper[4728]: I0204 11:50:55.663543 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6tghs"] Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 11:50:56.210372 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 11:50:56.210441 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 11:50:56.263515 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 11:50:56.328918 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6tghs" podUID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" containerName="registry-server" containerID="cri-o://ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3" gracePeriod=2 Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 
11:50:56.372426 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 11:50:56.822212 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 11:50:56.925874 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-utilities\") pod \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 11:50:56.925926 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-catalog-content\") pod \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 11:50:56.925960 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ghxc\" (UniqueName: \"kubernetes.io/projected/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-kube-api-access-4ghxc\") pod \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\" (UID: \"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9\") " Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 11:50:56.926658 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-utilities" (OuterVolumeSpecName: "utilities") pod "f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" (UID: "f0e7a1d3-e7c5-4490-aca5-c4864b8157b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 11:50:56.933954 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-kube-api-access-4ghxc" (OuterVolumeSpecName: "kube-api-access-4ghxc") pod "f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" (UID: "f0e7a1d3-e7c5-4490-aca5-c4864b8157b9"). InnerVolumeSpecName "kube-api-access-4ghxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:50:56 crc kubenswrapper[4728]: I0204 11:50:56.984256 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" (UID: "f0e7a1d3-e7c5-4490-aca5-c4864b8157b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.028673 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.028712 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ghxc\" (UniqueName: \"kubernetes.io/projected/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-kube-api-access-4ghxc\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.028726 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.341175 4728 generic.go:334] "Generic (PLEG): container finished" podID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" containerID="ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3" exitCode=0 Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.341236 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6tghs" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.341255 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tghs" event={"ID":"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9","Type":"ContainerDied","Data":"ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3"} Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.341403 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6tghs" event={"ID":"f0e7a1d3-e7c5-4490-aca5-c4864b8157b9","Type":"ContainerDied","Data":"9d96e5ced04f68c626c7942f166e81e2310ab5b147db287846ff99998ca586dd"} Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.341441 4728 scope.go:117] "RemoveContainer" containerID="ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.384091 4728 scope.go:117] "RemoveContainer" containerID="7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.392993 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6tghs"] Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.403105 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6tghs"] Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.410915 4728 scope.go:117] "RemoveContainer" containerID="f125b08f9f0377707dad628c7027f547a9cb468450463229c22e1d657aabdf84" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.478219 4728 scope.go:117] "RemoveContainer" containerID="ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3" Feb 04 11:50:57 crc kubenswrapper[4728]: E0204 11:50:57.478860 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3\": container with ID starting with ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3 not found: ID does not exist" containerID="ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.478915 
4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3"} err="failed to get container status \"ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3\": rpc error: code = NotFound desc = could not find container \"ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3\": container with ID starting with ac77a2a242a5c4da17b9daf5d8bd8809a1de58623c251357fd3d50dda00f71c3 not found: ID does not exist" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.478948 4728 scope.go:117] "RemoveContainer" containerID="7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663" Feb 04 11:50:57 crc kubenswrapper[4728]: E0204 11:50:57.479267 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663\": container with ID starting with 7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663 not found: ID does not exist" containerID="7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.479333 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663"} err="failed to get container status \"7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663\": rpc error: code = NotFound desc = could not find container \"7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663\": container with ID starting with 7a043f2b3ad0b8090f53890ec34cd13104e11b64dda5059297837d1ad4008663 not found: ID does not exist" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.479359 4728 scope.go:117] "RemoveContainer" containerID="f125b08f9f0377707dad628c7027f547a9cb468450463229c22e1d657aabdf84" Feb 04 11:50:57 crc kubenswrapper[4728]: E0204 11:50:57.479620 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f125b08f9f0377707dad628c7027f547a9cb468450463229c22e1d657aabdf84\": container with ID starting with f125b08f9f0377707dad628c7027f547a9cb468450463229c22e1d657aabdf84 not found: ID does not exist" containerID="f125b08f9f0377707dad628c7027f547a9cb468450463229c22e1d657aabdf84" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.479663 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f125b08f9f0377707dad628c7027f547a9cb468450463229c22e1d657aabdf84"} err="failed to get container status \"f125b08f9f0377707dad628c7027f547a9cb468450463229c22e1d657aabdf84\": rpc error: code = NotFound desc = could not find container \"f125b08f9f0377707dad628c7027f547a9cb468450463229c22e1d657aabdf84\": container with ID starting with f125b08f9f0377707dad628c7027f547a9cb468450463229c22e1d657aabdf84 not found: ID does not exist" Feb 04 11:50:57 crc kubenswrapper[4728]: I0204 11:50:57.564306 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" path="/var/lib/kubelet/pods/f0e7a1d3-e7c5-4490-aca5-c4864b8157b9/volumes" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.037037 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-5zqtn" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.124484 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7d84b4d45c-td6nw"] Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.124828 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" podUID="e74c22b2-61d2-4d72-9b09-a6fc55cf0719" containerName="dnsmasq-dns" containerID="cri-o://319945817ceefe4b7291e0d9088a0c9230377762d67918464a922577362c6322" gracePeriod=10 Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.362604 4728 generic.go:334] "Generic (PLEG): container finished" podID="e74c22b2-61d2-4d72-9b09-a6fc55cf0719" containerID="319945817ceefe4b7291e0d9088a0c9230377762d67918464a922577362c6322" exitCode=0 Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.362681 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" event={"ID":"e74c22b2-61d2-4d72-9b09-a6fc55cf0719","Type":"ContainerDied","Data":"319945817ceefe4b7291e0d9088a0c9230377762d67918464a922577362c6322"} Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.666737 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r767d"] Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.667622 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r767d" podUID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" containerName="registry-server" containerID="cri-o://8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90" gracePeriod=2 Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.748513 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.861211 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-config\") pod \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.861260 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-svc\") pod \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.861282 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-sb\") pod \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.861320 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-649rj\" (UniqueName: \"kubernetes.io/projected/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-kube-api-access-649rj\") pod \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.861356 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-nb\") pod \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.861410 4728 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-openstack-edpm-ipam\") pod \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.861455 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-swift-storage-0\") pod \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\" (UID: \"e74c22b2-61d2-4d72-9b09-a6fc55cf0719\") " Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.874578 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-kube-api-access-649rj" (OuterVolumeSpecName: "kube-api-access-649rj") pod "e74c22b2-61d2-4d72-9b09-a6fc55cf0719" (UID: "e74c22b2-61d2-4d72-9b09-a6fc55cf0719"). InnerVolumeSpecName "kube-api-access-649rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.943620 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e74c22b2-61d2-4d72-9b09-a6fc55cf0719" (UID: "e74c22b2-61d2-4d72-9b09-a6fc55cf0719"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.948234 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e74c22b2-61d2-4d72-9b09-a6fc55cf0719" (UID: "e74c22b2-61d2-4d72-9b09-a6fc55cf0719"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.950318 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e74c22b2-61d2-4d72-9b09-a6fc55cf0719" (UID: "e74c22b2-61d2-4d72-9b09-a6fc55cf0719"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.950470 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-config" (OuterVolumeSpecName: "config") pod "e74c22b2-61d2-4d72-9b09-a6fc55cf0719" (UID: "e74c22b2-61d2-4d72-9b09-a6fc55cf0719"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.961465 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "e74c22b2-61d2-4d72-9b09-a6fc55cf0719" (UID: "e74c22b2-61d2-4d72-9b09-a6fc55cf0719"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.963795 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-config\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.963823 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.963836 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.963849 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-649rj\" (UniqueName: \"kubernetes.io/projected/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-kube-api-access-649rj\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.963860 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.963873 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:58 crc kubenswrapper[4728]: I0204 11:50:58.985148 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e74c22b2-61d2-4d72-9b09-a6fc55cf0719" (UID: "e74c22b2-61d2-4d72-9b09-a6fc55cf0719"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.063832 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.065156 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e74c22b2-61d2-4d72-9b09-a6fc55cf0719-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.166117 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-catalog-content\") pod \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\" (UID: \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.166219 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-utilities\") pod \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\" (UID: \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.166283 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxrwq\" (UniqueName: \"kubernetes.io/projected/b81466f6-c3d7-4d2b-9f44-74104176f8fb-kube-api-access-mxrwq\") pod \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\" (UID: \"b81466f6-c3d7-4d2b-9f44-74104176f8fb\") " Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.167142 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-utilities" (OuterVolumeSpecName: "utilities") pod "b81466f6-c3d7-4d2b-9f44-74104176f8fb" (UID: "b81466f6-c3d7-4d2b-9f44-74104176f8fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.170230 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81466f6-c3d7-4d2b-9f44-74104176f8fb-kube-api-access-mxrwq" (OuterVolumeSpecName: "kube-api-access-mxrwq") pod "b81466f6-c3d7-4d2b-9f44-74104176f8fb" (UID: "b81466f6-c3d7-4d2b-9f44-74104176f8fb"). InnerVolumeSpecName "kube-api-access-mxrwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.225715 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b81466f6-c3d7-4d2b-9f44-74104176f8fb" (UID: "b81466f6-c3d7-4d2b-9f44-74104176f8fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.303787 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxrwq\" (UniqueName: \"kubernetes.io/projected/b81466f6-c3d7-4d2b-9f44-74104176f8fb-kube-api-access-mxrwq\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.303837 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.303851 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b81466f6-c3d7-4d2b-9f44-74104176f8fb-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.380409 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" event={"ID":"e74c22b2-61d2-4d72-9b09-a6fc55cf0719","Type":"ContainerDied","Data":"721669364bee9e53aa1157fe13823ade925c471c9527d6bfd642278f4e9fe5a9"} Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.380462 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-td6nw" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.380496 4728 scope.go:117] "RemoveContainer" containerID="319945817ceefe4b7291e0d9088a0c9230377762d67918464a922577362c6322" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.384131 4728 generic.go:334] "Generic (PLEG): container finished" podID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" containerID="8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90" exitCode=0 Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.384180 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r767d" event={"ID":"b81466f6-c3d7-4d2b-9f44-74104176f8fb","Type":"ContainerDied","Data":"8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90"} Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.384210 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r767d" event={"ID":"b81466f6-c3d7-4d2b-9f44-74104176f8fb","Type":"ContainerDied","Data":"7c72c2ce070c245073926b085555294449ac2edb619f67be45aa8c6ac6be4333"} Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.384222 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r767d" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.419408 4728 scope.go:117] "RemoveContainer" containerID="8826cb08fff982d2afe85dc4f9ce1dbdbdc5631fda94effb32ee637fe308feb8" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.426928 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r767d"] Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.436185 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r767d"] Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.445778 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-td6nw"] Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.454985 4728 scope.go:117] "RemoveContainer" containerID="8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.464338 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-td6nw"] Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.478373 4728 scope.go:117] "RemoveContainer" containerID="dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.508450 4728 scope.go:117] "RemoveContainer" containerID="be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.525500 4728 scope.go:117] "RemoveContainer" containerID="8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90" Feb 04 11:50:59 crc kubenswrapper[4728]: E0204 11:50:59.526136 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90\": container with ID starting with 8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90 not found: ID does not exist" containerID="8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.526176 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90"} err="failed to get container status \"8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90\": rpc error: code = NotFound desc = could not find container \"8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90\": container with ID starting with 8c32939e873ce1c82365480b57407969b6aa432ce6ae2a72eedd0b02cc872f90 not found: ID does not exist" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.526210 4728 scope.go:117] "RemoveContainer" containerID="dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4" Feb 04 11:50:59 crc kubenswrapper[4728]: E0204 11:50:59.526618 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4\": container with ID starting with dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4 not found: ID does not exist" containerID="dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.526647 4728 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4"} err="failed to get container status \"dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4\": rpc error: code = NotFound desc = could not find container \"dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4\": container with ID starting with dfbb035261bedd244f969e2679faa3f57bdbef951e51f9e3a5d3a0c8980779a4 not found: ID does not exist" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.526664 4728 scope.go:117] "RemoveContainer" containerID="be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b" Feb 04 11:50:59 crc kubenswrapper[4728]: E0204 11:50:59.526990 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b\": container with ID starting with be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b not found: ID does not exist" containerID="be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.527036 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b"} err="failed to get container status \"be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b\": rpc error: code = NotFound desc = could not find container \"be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b\": container with ID starting with be494c753feddd1b8dc309a7270b166c41d7fc7374892b031be203ff91328b4b not found: ID does not exist" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.568676 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" path="/var/lib/kubelet/pods/b81466f6-c3d7-4d2b-9f44-74104176f8fb/volumes" Feb 04 11:50:59 crc kubenswrapper[4728]: I0204 11:50:59.569681 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e74c22b2-61d2-4d72-9b09-a6fc55cf0719" path="/var/lib/kubelet/pods/e74c22b2-61d2-4d72-9b09-a6fc55cf0719/volumes" Feb 04 11:51:05 crc kubenswrapper[4728]: I0204 11:51:05.442386 4728 generic.go:334] "Generic (PLEG): container finished" podID="2f73f795-7173-4835-b233-b78a4bd41854" containerID="b44f72df9a51a4b269700f9eeb455d5a011ad3a34ad416db3d3d5614b4955fee" exitCode=0 Feb 04 11:51:05 crc kubenswrapper[4728]: I0204 11:51:05.442873 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f73f795-7173-4835-b233-b78a4bd41854","Type":"ContainerDied","Data":"b44f72df9a51a4b269700f9eeb455d5a011ad3a34ad416db3d3d5614b4955fee"} Feb 04 11:51:06 crc kubenswrapper[4728]: I0204 11:51:06.455943 4728 generic.go:334] "Generic (PLEG): container finished" podID="f8dc874e-ea4b-47a5-9f00-d1633fb509ba" containerID="228c40ce2700b4e25608e8b449fbf03787a1e70779b773f9cf39c7c563ec91cb" exitCode=0 Feb 04 11:51:06 crc kubenswrapper[4728]: I0204 11:51:06.456135 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f8dc874e-ea4b-47a5-9f00-d1633fb509ba","Type":"ContainerDied","Data":"228c40ce2700b4e25608e8b449fbf03787a1e70779b773f9cf39c7c563ec91cb"} Feb 04 11:51:06 crc kubenswrapper[4728]: I0204 11:51:06.460101 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"2f73f795-7173-4835-b233-b78a4bd41854","Type":"ContainerStarted","Data":"0dbbb197d1b72f99a744588f25457108de844f7266a6c038ef7b661f353b072e"} Feb 04 11:51:06 crc kubenswrapper[4728]: I0204 11:51:06.461072 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 04 11:51:06 crc kubenswrapper[4728]: I0204 11:51:06.522092 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.522071717 podStartE2EDuration="36.522071717s" podCreationTimestamp="2026-02-04 11:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:51:06.510528454 +0000 UTC m=+1415.653232849" watchObservedRunningTime="2026-02-04 11:51:06.522071717 +0000 UTC m=+1415.664776102" Feb 04 11:51:07 crc kubenswrapper[4728]: I0204 11:51:07.470943 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f8dc874e-ea4b-47a5-9f00-d1633fb509ba","Type":"ContainerStarted","Data":"43dbf248ab52e4c67a237cb4e7b615d62f17c550697f980afd29a21e65defb5f"} Feb 04 11:51:07 crc kubenswrapper[4728]: I0204 11:51:07.471474 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:51:07 crc kubenswrapper[4728]: I0204 11:51:07.501228 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.501205225 podStartE2EDuration="36.501205225s" podCreationTimestamp="2026-02-04 11:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 11:51:07.494451425 +0000 UTC m=+1416.637155800" watchObservedRunningTime="2026-02-04 11:51:07.501205225 +0000 UTC m=+1416.643909620" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.832805 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6rqtl"] Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833614 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" containerName="extract-utilities" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833625 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" containerName="extract-utilities" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833637 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74c22b2-61d2-4d72-9b09-a6fc55cf0719" containerName="init" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833643 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74c22b2-61d2-4d72-9b09-a6fc55cf0719" containerName="init" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833652 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c65630e-c4d5-43d3-89c5-7e5a62951230" containerName="init" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833660 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c65630e-c4d5-43d3-89c5-7e5a62951230" containerName="init" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833672 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c65630e-c4d5-43d3-89c5-7e5a62951230" containerName="dnsmasq-dns" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833679 4728 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1c65630e-c4d5-43d3-89c5-7e5a62951230" containerName="dnsmasq-dns" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833689 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerName="registry-server" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833695 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerName="registry-server" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833701 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" containerName="extract-content" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833707 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" containerName="extract-content" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833727 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" containerName="registry-server" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833733 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" containerName="registry-server" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833739 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerName="extract-content" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833744 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerName="extract-content" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833828 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" containerName="registry-server" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833834 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" containerName="registry-server" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833848 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerName="extract-utilities" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833853 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerName="extract-utilities" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833863 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" containerName="extract-content" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833868 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" containerName="extract-content" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833879 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74c22b2-61d2-4d72-9b09-a6fc55cf0719" containerName="dnsmasq-dns" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833885 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74c22b2-61d2-4d72-9b09-a6fc55cf0719" containerName="dnsmasq-dns" Feb 04 11:51:10 crc kubenswrapper[4728]: E0204 11:51:10.833892 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" containerName="extract-utilities" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.833898 4728 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" containerName="extract-utilities" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.834048 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d51cc6e-76e2-4865-9679-4385711b8e0a" containerName="registry-server" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.834073 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e74c22b2-61d2-4d72-9b09-a6fc55cf0719" containerName="dnsmasq-dns" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.834080 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c65630e-c4d5-43d3-89c5-7e5a62951230" containerName="dnsmasq-dns" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.834088 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e7a1d3-e7c5-4490-aca5-c4864b8157b9" containerName="registry-server" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.834100 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81466f6-c3d7-4d2b-9f44-74104176f8fb" containerName="registry-server" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.835463 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.846315 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rqtl"] Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.916129 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-utilities\") pod \"redhat-marketplace-6rqtl\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.916184 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-catalog-content\") pod \"redhat-marketplace-6rqtl\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:10 crc kubenswrapper[4728]: I0204 11:51:10.916237 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twnvl\" (UniqueName: \"kubernetes.io/projected/a6865455-5888-4a57-a150-133eb3ffa9fe-kube-api-access-twnvl\") pod \"redhat-marketplace-6rqtl\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.018446 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-utilities\") pod \"redhat-marketplace-6rqtl\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.018499 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-catalog-content\") pod \"redhat-marketplace-6rqtl\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 
11:51:11.018531 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twnvl\" (UniqueName: \"kubernetes.io/projected/a6865455-5888-4a57-a150-133eb3ffa9fe-kube-api-access-twnvl\") pod \"redhat-marketplace-6rqtl\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.019072 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-utilities\") pod \"redhat-marketplace-6rqtl\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.019100 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-catalog-content\") pod \"redhat-marketplace-6rqtl\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.053627 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twnvl\" (UniqueName: \"kubernetes.io/projected/a6865455-5888-4a57-a150-133eb3ffa9fe-kube-api-access-twnvl\") pod \"redhat-marketplace-6rqtl\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.069910 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt"] Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.071041 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.073069 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.074159 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.074242 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.074262 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.095655 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt"] Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.120782 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljbfq\" (UniqueName: \"kubernetes.io/projected/8e03d68e-aab9-4abb-969e-649efb0dc80a-kube-api-access-ljbfq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.120865 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.120917 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.120968 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.151565 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.222350 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.222437 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.222607 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljbfq\" (UniqueName: \"kubernetes.io/projected/8e03d68e-aab9-4abb-969e-649efb0dc80a-kube-api-access-ljbfq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.222666 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.229628 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.231381 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.236377 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.250778 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljbfq\" (UniqueName: \"kubernetes.io/projected/8e03d68e-aab9-4abb-969e-649efb0dc80a-kube-api-access-ljbfq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt\" (UID: 
\"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.451686 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:11 crc kubenswrapper[4728]: I0204 11:51:11.700401 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rqtl"] Feb 04 11:51:12 crc kubenswrapper[4728]: W0204 11:51:12.075743 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e03d68e_aab9_4abb_969e_649efb0dc80a.slice/crio-b9572bb08057f0b9eccd092a05aaae57b310d611190916bc5a44b1437388b8b7 WatchSource:0}: Error finding container b9572bb08057f0b9eccd092a05aaae57b310d611190916bc5a44b1437388b8b7: Status 404 returned error can't find the container with id b9572bb08057f0b9eccd092a05aaae57b310d611190916bc5a44b1437388b8b7 Feb 04 11:51:12 crc kubenswrapper[4728]: I0204 11:51:12.077533 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt"] Feb 04 11:51:12 crc kubenswrapper[4728]: I0204 11:51:12.523523 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" event={"ID":"8e03d68e-aab9-4abb-969e-649efb0dc80a","Type":"ContainerStarted","Data":"b9572bb08057f0b9eccd092a05aaae57b310d611190916bc5a44b1437388b8b7"} Feb 04 11:51:12 crc kubenswrapper[4728]: I0204 11:51:12.524937 4728 generic.go:334] "Generic (PLEG): container finished" podID="a6865455-5888-4a57-a150-133eb3ffa9fe" containerID="4bedbdf102ab1ad6002ed1b84c08e8daa87b5e41b7f77df24d2ee9dae682beb6" exitCode=0 Feb 04 11:51:12 crc kubenswrapper[4728]: I0204 11:51:12.524979 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rqtl" event={"ID":"a6865455-5888-4a57-a150-133eb3ffa9fe","Type":"ContainerDied","Data":"4bedbdf102ab1ad6002ed1b84c08e8daa87b5e41b7f77df24d2ee9dae682beb6"} Feb 04 11:51:12 crc kubenswrapper[4728]: I0204 11:51:12.525003 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rqtl" event={"ID":"a6865455-5888-4a57-a150-133eb3ffa9fe","Type":"ContainerStarted","Data":"a53f022ae42d34cb6926df2e69ff97b4a5b9def6eaacd66a73adf0ba688f4707"} Feb 04 11:51:14 crc kubenswrapper[4728]: I0204 11:51:14.548673 4728 generic.go:334] "Generic (PLEG): container finished" podID="a6865455-5888-4a57-a150-133eb3ffa9fe" containerID="7e264873ae9dda1c0648e7f792fa2322533e750d5fc58d7a772eb3cb8af448c8" exitCode=0 Feb 04 11:51:14 crc kubenswrapper[4728]: I0204 11:51:14.548734 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rqtl" event={"ID":"a6865455-5888-4a57-a150-133eb3ffa9fe","Type":"ContainerDied","Data":"7e264873ae9dda1c0648e7f792fa2322533e750d5fc58d7a772eb3cb8af448c8"} Feb 04 11:51:15 crc kubenswrapper[4728]: I0204 11:51:15.566561 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rqtl" event={"ID":"a6865455-5888-4a57-a150-133eb3ffa9fe","Type":"ContainerStarted","Data":"6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b"} Feb 04 11:51:15 crc kubenswrapper[4728]: I0204 11:51:15.592742 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6rqtl" 
podStartSLOduration=3.006765092 podStartE2EDuration="5.592723214s" podCreationTimestamp="2026-02-04 11:51:10 +0000 UTC" firstStartedPulling="2026-02-04 11:51:12.527175132 +0000 UTC m=+1421.669879517" lastFinishedPulling="2026-02-04 11:51:15.113133244 +0000 UTC m=+1424.255837639" observedRunningTime="2026-02-04 11:51:15.586430679 +0000 UTC m=+1424.729135054" watchObservedRunningTime="2026-02-04 11:51:15.592723214 +0000 UTC m=+1424.735427589" Feb 04 11:51:20 crc kubenswrapper[4728]: I0204 11:51:20.703521 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 04 11:51:21 crc kubenswrapper[4728]: I0204 11:51:21.152224 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:21 crc kubenswrapper[4728]: I0204 11:51:21.152263 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:21 crc kubenswrapper[4728]: I0204 11:51:21.198894 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:21 crc kubenswrapper[4728]: I0204 11:51:21.686820 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 04 11:51:21 crc kubenswrapper[4728]: I0204 11:51:21.689788 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:21 crc kubenswrapper[4728]: I0204 11:51:21.780379 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rqtl"] Feb 04 11:51:23 crc kubenswrapper[4728]: I0204 11:51:23.662258 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" event={"ID":"8e03d68e-aab9-4abb-969e-649efb0dc80a","Type":"ContainerStarted","Data":"f9c7bf8095a85e590e930ad29a237f9308bc5251d26458b0c8153dd30cbd92bb"} Feb 04 11:51:23 crc kubenswrapper[4728]: I0204 11:51:23.662351 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6rqtl" podUID="a6865455-5888-4a57-a150-133eb3ffa9fe" containerName="registry-server" containerID="cri-o://6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b" gracePeriod=2 Feb 04 11:51:23 crc kubenswrapper[4728]: I0204 11:51:23.697006 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" podStartSLOduration=1.920869003 podStartE2EDuration="12.696987081s" podCreationTimestamp="2026-02-04 11:51:11 +0000 UTC" firstStartedPulling="2026-02-04 11:51:12.077818534 +0000 UTC m=+1421.220522919" lastFinishedPulling="2026-02-04 11:51:22.853936612 +0000 UTC m=+1431.996640997" observedRunningTime="2026-02-04 11:51:23.686072493 +0000 UTC m=+1432.828776878" watchObservedRunningTime="2026-02-04 11:51:23.696987081 +0000 UTC m=+1432.839691466" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.166321 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.217708 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-catalog-content\") pod \"a6865455-5888-4a57-a150-133eb3ffa9fe\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.217884 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-utilities\") pod \"a6865455-5888-4a57-a150-133eb3ffa9fe\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.217942 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twnvl\" (UniqueName: \"kubernetes.io/projected/a6865455-5888-4a57-a150-133eb3ffa9fe-kube-api-access-twnvl\") pod \"a6865455-5888-4a57-a150-133eb3ffa9fe\" (UID: \"a6865455-5888-4a57-a150-133eb3ffa9fe\") " Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.218807 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-utilities" (OuterVolumeSpecName: "utilities") pod "a6865455-5888-4a57-a150-133eb3ffa9fe" (UID: "a6865455-5888-4a57-a150-133eb3ffa9fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.224356 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6865455-5888-4a57-a150-133eb3ffa9fe-kube-api-access-twnvl" (OuterVolumeSpecName: "kube-api-access-twnvl") pod "a6865455-5888-4a57-a150-133eb3ffa9fe" (UID: "a6865455-5888-4a57-a150-133eb3ffa9fe"). InnerVolumeSpecName "kube-api-access-twnvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.251037 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6865455-5888-4a57-a150-133eb3ffa9fe" (UID: "a6865455-5888-4a57-a150-133eb3ffa9fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.320514 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.320549 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6865455-5888-4a57-a150-133eb3ffa9fe-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.320560 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twnvl\" (UniqueName: \"kubernetes.io/projected/a6865455-5888-4a57-a150-133eb3ffa9fe-kube-api-access-twnvl\") on node \"crc\" DevicePath \"\"" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.674043 4728 generic.go:334] "Generic (PLEG): container finished" podID="a6865455-5888-4a57-a150-133eb3ffa9fe" containerID="6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b" exitCode=0 Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.674132 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rqtl" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.674121 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rqtl" event={"ID":"a6865455-5888-4a57-a150-133eb3ffa9fe","Type":"ContainerDied","Data":"6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b"} Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.674185 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rqtl" event={"ID":"a6865455-5888-4a57-a150-133eb3ffa9fe","Type":"ContainerDied","Data":"a53f022ae42d34cb6926df2e69ff97b4a5b9def6eaacd66a73adf0ba688f4707"} Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.674232 4728 scope.go:117] "RemoveContainer" containerID="6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.702412 4728 scope.go:117] "RemoveContainer" containerID="7e264873ae9dda1c0648e7f792fa2322533e750d5fc58d7a772eb3cb8af448c8" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.741571 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rqtl"] Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.746112 4728 scope.go:117] "RemoveContainer" containerID="4bedbdf102ab1ad6002ed1b84c08e8daa87b5e41b7f77df24d2ee9dae682beb6" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.759353 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rqtl"] Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.792007 4728 scope.go:117] "RemoveContainer" containerID="6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b" Feb 04 11:51:24 crc kubenswrapper[4728]: E0204 11:51:24.792477 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b\": container with ID starting with 6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b not found: ID does not exist" containerID="6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.792535 4728 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b"} err="failed to get container status \"6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b\": rpc error: code = NotFound desc = could not find container \"6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b\": container with ID starting with 6702e284f7de8d8e05464ee3e76343056d5fd7fe758e76593390a04009139d7b not found: ID does not exist" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.792566 4728 scope.go:117] "RemoveContainer" containerID="7e264873ae9dda1c0648e7f792fa2322533e750d5fc58d7a772eb3cb8af448c8" Feb 04 11:51:24 crc kubenswrapper[4728]: E0204 11:51:24.792905 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e264873ae9dda1c0648e7f792fa2322533e750d5fc58d7a772eb3cb8af448c8\": container with ID starting with 7e264873ae9dda1c0648e7f792fa2322533e750d5fc58d7a772eb3cb8af448c8 not found: ID does not exist" containerID="7e264873ae9dda1c0648e7f792fa2322533e750d5fc58d7a772eb3cb8af448c8" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.793001 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e264873ae9dda1c0648e7f792fa2322533e750d5fc58d7a772eb3cb8af448c8"} err="failed to get container status \"7e264873ae9dda1c0648e7f792fa2322533e750d5fc58d7a772eb3cb8af448c8\": rpc error: code = NotFound desc = could not find container \"7e264873ae9dda1c0648e7f792fa2322533e750d5fc58d7a772eb3cb8af448c8\": container with ID starting with 7e264873ae9dda1c0648e7f792fa2322533e750d5fc58d7a772eb3cb8af448c8 not found: ID does not exist" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.793031 4728 scope.go:117] "RemoveContainer" containerID="4bedbdf102ab1ad6002ed1b84c08e8daa87b5e41b7f77df24d2ee9dae682beb6" Feb 04 11:51:24 crc kubenswrapper[4728]: E0204 11:51:24.793317 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bedbdf102ab1ad6002ed1b84c08e8daa87b5e41b7f77df24d2ee9dae682beb6\": container with ID starting with 4bedbdf102ab1ad6002ed1b84c08e8daa87b5e41b7f77df24d2ee9dae682beb6 not found: ID does not exist" containerID="4bedbdf102ab1ad6002ed1b84c08e8daa87b5e41b7f77df24d2ee9dae682beb6" Feb 04 11:51:24 crc kubenswrapper[4728]: I0204 11:51:24.793377 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bedbdf102ab1ad6002ed1b84c08e8daa87b5e41b7f77df24d2ee9dae682beb6"} err="failed to get container status \"4bedbdf102ab1ad6002ed1b84c08e8daa87b5e41b7f77df24d2ee9dae682beb6\": rpc error: code = NotFound desc = could not find container \"4bedbdf102ab1ad6002ed1b84c08e8daa87b5e41b7f77df24d2ee9dae682beb6\": container with ID starting with 4bedbdf102ab1ad6002ed1b84c08e8daa87b5e41b7f77df24d2ee9dae682beb6 not found: ID does not exist" Feb 04 11:51:25 crc kubenswrapper[4728]: I0204 11:51:25.564225 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6865455-5888-4a57-a150-133eb3ffa9fe" path="/var/lib/kubelet/pods/a6865455-5888-4a57-a150-133eb3ffa9fe/volumes" Feb 04 11:51:34 crc kubenswrapper[4728]: I0204 11:51:34.774423 4728 generic.go:334] "Generic (PLEG): container finished" podID="8e03d68e-aab9-4abb-969e-649efb0dc80a" containerID="f9c7bf8095a85e590e930ad29a237f9308bc5251d26458b0c8153dd30cbd92bb" exitCode=0 Feb 04 11:51:34 crc kubenswrapper[4728]: I0204 
11:51:34.774498 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" event={"ID":"8e03d68e-aab9-4abb-969e-649efb0dc80a","Type":"ContainerDied","Data":"f9c7bf8095a85e590e930ad29a237f9308bc5251d26458b0c8153dd30cbd92bb"} Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.195955 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.263678 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-inventory\") pod \"8e03d68e-aab9-4abb-969e-649efb0dc80a\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.263747 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-repo-setup-combined-ca-bundle\") pod \"8e03d68e-aab9-4abb-969e-649efb0dc80a\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.264009 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljbfq\" (UniqueName: \"kubernetes.io/projected/8e03d68e-aab9-4abb-969e-649efb0dc80a-kube-api-access-ljbfq\") pod \"8e03d68e-aab9-4abb-969e-649efb0dc80a\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.264036 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-ssh-key-openstack-edpm-ipam\") pod \"8e03d68e-aab9-4abb-969e-649efb0dc80a\" (UID: \"8e03d68e-aab9-4abb-969e-649efb0dc80a\") " Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.269167 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8e03d68e-aab9-4abb-969e-649efb0dc80a" (UID: "8e03d68e-aab9-4abb-969e-649efb0dc80a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.269179 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e03d68e-aab9-4abb-969e-649efb0dc80a-kube-api-access-ljbfq" (OuterVolumeSpecName: "kube-api-access-ljbfq") pod "8e03d68e-aab9-4abb-969e-649efb0dc80a" (UID: "8e03d68e-aab9-4abb-969e-649efb0dc80a"). InnerVolumeSpecName "kube-api-access-ljbfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.293812 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8e03d68e-aab9-4abb-969e-649efb0dc80a" (UID: "8e03d68e-aab9-4abb-969e-649efb0dc80a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.293816 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-inventory" (OuterVolumeSpecName: "inventory") pod "8e03d68e-aab9-4abb-969e-649efb0dc80a" (UID: "8e03d68e-aab9-4abb-969e-649efb0dc80a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.365731 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljbfq\" (UniqueName: \"kubernetes.io/projected/8e03d68e-aab9-4abb-969e-649efb0dc80a-kube-api-access-ljbfq\") on node \"crc\" DevicePath \"\"" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.365780 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.365795 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.365807 4728 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e03d68e-aab9-4abb-969e-649efb0dc80a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.796022 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" event={"ID":"8e03d68e-aab9-4abb-969e-649efb0dc80a","Type":"ContainerDied","Data":"b9572bb08057f0b9eccd092a05aaae57b310d611190916bc5a44b1437388b8b7"} Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.796081 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9572bb08057f0b9eccd092a05aaae57b310d611190916bc5a44b1437388b8b7" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.796102 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.879911 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch"] Feb 04 11:51:36 crc kubenswrapper[4728]: E0204 11:51:36.880288 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e03d68e-aab9-4abb-969e-649efb0dc80a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.880309 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e03d68e-aab9-4abb-969e-649efb0dc80a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 04 11:51:36 crc kubenswrapper[4728]: E0204 11:51:36.880328 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6865455-5888-4a57-a150-133eb3ffa9fe" containerName="extract-content" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.880336 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6865455-5888-4a57-a150-133eb3ffa9fe" containerName="extract-content" Feb 04 11:51:36 crc kubenswrapper[4728]: E0204 11:51:36.880364 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6865455-5888-4a57-a150-133eb3ffa9fe" containerName="extract-utilities" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.880372 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6865455-5888-4a57-a150-133eb3ffa9fe" containerName="extract-utilities" Feb 04 11:51:36 crc kubenswrapper[4728]: E0204 11:51:36.880391 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6865455-5888-4a57-a150-133eb3ffa9fe" containerName="registry-server" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.880398 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6865455-5888-4a57-a150-133eb3ffa9fe" containerName="registry-server" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.880585 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e03d68e-aab9-4abb-969e-649efb0dc80a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.880595 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6865455-5888-4a57-a150-133eb3ffa9fe" containerName="registry-server" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.881447 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.885732 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.885893 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.885909 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.886802 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.892573 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch"] Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.977644 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4krtj\" (UniqueName: \"kubernetes.io/projected/84be5405-8879-4346-aeed-c55e106b37f7-kube-api-access-4krtj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gllch\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.977815 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gllch\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:36 crc kubenswrapper[4728]: I0204 11:51:36.977942 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gllch\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.051169 4728 scope.go:117] "RemoveContainer" containerID="19f8fb769c037cda88b380f934d9f29ab566e04fcb53660976556b6985c58a0f" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.074093 4728 scope.go:117] "RemoveContainer" containerID="8927439f6e4657451e2df09e29e2f50b38aeaecf07546b4380a751b9a8c9a3d7" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.079962 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gllch\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.080319 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gllch\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.080868 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4krtj\" (UniqueName: \"kubernetes.io/projected/84be5405-8879-4346-aeed-c55e106b37f7-kube-api-access-4krtj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gllch\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.084898 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gllch\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.086061 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gllch\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.098733 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4krtj\" (UniqueName: \"kubernetes.io/projected/84be5405-8879-4346-aeed-c55e106b37f7-kube-api-access-4krtj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gllch\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.160399 4728 scope.go:117] "RemoveContainer" containerID="ae1849fd6482bb0d8fb0c3ff1aaeba1d059d16bcb3b6febe3aa178beb91f0bc7" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.202581 4728 scope.go:117] "RemoveContainer" containerID="6af1b370baa1db28f086f14c8147be9a960bb5a6a385fa381dedf5bc0f8f6d10" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.214545 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.227008 4728 scope.go:117] "RemoveContainer" containerID="033f13c151d674a6dc88862bc77c1f88ce3e55701e2c2eafadf46fae428bf480" Feb 04 11:51:37 crc kubenswrapper[4728]: I0204 11:51:37.904803 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch"] Feb 04 11:51:38 crc kubenswrapper[4728]: I0204 11:51:38.814119 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" event={"ID":"84be5405-8879-4346-aeed-c55e106b37f7","Type":"ContainerStarted","Data":"fa7e59973c9c27dfc07a0319770c8e305458f7a57bdf843b9431399a8236a27b"} Feb 04 11:51:38 crc kubenswrapper[4728]: I0204 11:51:38.814443 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" event={"ID":"84be5405-8879-4346-aeed-c55e106b37f7","Type":"ContainerStarted","Data":"84bdd4cbeec92345882f88693b31368483f14f3dc344b4d6151b540562f62443"} Feb 04 11:51:41 crc kubenswrapper[4728]: I0204 11:51:41.840506 4728 generic.go:334] "Generic (PLEG): container finished" podID="84be5405-8879-4346-aeed-c55e106b37f7" containerID="fa7e59973c9c27dfc07a0319770c8e305458f7a57bdf843b9431399a8236a27b" exitCode=0 Feb 04 11:51:41 crc kubenswrapper[4728]: I0204 11:51:41.840598 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" event={"ID":"84be5405-8879-4346-aeed-c55e106b37f7","Type":"ContainerDied","Data":"fa7e59973c9c27dfc07a0319770c8e305458f7a57bdf843b9431399a8236a27b"} Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.214965 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.306812 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4krtj\" (UniqueName: \"kubernetes.io/projected/84be5405-8879-4346-aeed-c55e106b37f7-kube-api-access-4krtj\") pod \"84be5405-8879-4346-aeed-c55e106b37f7\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.307065 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-inventory\") pod \"84be5405-8879-4346-aeed-c55e106b37f7\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.307174 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-ssh-key-openstack-edpm-ipam\") pod \"84be5405-8879-4346-aeed-c55e106b37f7\" (UID: \"84be5405-8879-4346-aeed-c55e106b37f7\") " Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.312861 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84be5405-8879-4346-aeed-c55e106b37f7-kube-api-access-4krtj" (OuterVolumeSpecName: "kube-api-access-4krtj") pod "84be5405-8879-4346-aeed-c55e106b37f7" (UID: "84be5405-8879-4346-aeed-c55e106b37f7"). InnerVolumeSpecName "kube-api-access-4krtj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.335169 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-inventory" (OuterVolumeSpecName: "inventory") pod "84be5405-8879-4346-aeed-c55e106b37f7" (UID: "84be5405-8879-4346-aeed-c55e106b37f7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.335469 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "84be5405-8879-4346-aeed-c55e106b37f7" (UID: "84be5405-8879-4346-aeed-c55e106b37f7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.409124 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.409166 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84be5405-8879-4346-aeed-c55e106b37f7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.409179 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4krtj\" (UniqueName: \"kubernetes.io/projected/84be5405-8879-4346-aeed-c55e106b37f7-kube-api-access-4krtj\") on node \"crc\" DevicePath \"\"" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.857219 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" event={"ID":"84be5405-8879-4346-aeed-c55e106b37f7","Type":"ContainerDied","Data":"84bdd4cbeec92345882f88693b31368483f14f3dc344b4d6151b540562f62443"} Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.857509 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84bdd4cbeec92345882f88693b31368483f14f3dc344b4d6151b540562f62443" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.857289 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gllch" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.918040 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5"] Feb 04 11:51:43 crc kubenswrapper[4728]: E0204 11:51:43.918427 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84be5405-8879-4346-aeed-c55e106b37f7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.918467 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="84be5405-8879-4346-aeed-c55e106b37f7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.918681 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="84be5405-8879-4346-aeed-c55e106b37f7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.919265 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.921252 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.921572 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.922569 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.926260 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 11:51:43 crc kubenswrapper[4728]: I0204 11:51:43.930794 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5"] Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.020045 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.020135 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlgvh\" (UniqueName: \"kubernetes.io/projected/56f043ba-0442-438f-80c8-64fc95caf1f0-kube-api-access-zlgvh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.020156 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.020211 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.121566 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlgvh\" (UniqueName: \"kubernetes.io/projected/56f043ba-0442-438f-80c8-64fc95caf1f0-kube-api-access-zlgvh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.121625 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.121702 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.121813 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.125859 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.126058 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.136504 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.138903 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlgvh\" (UniqueName: \"kubernetes.io/projected/56f043ba-0442-438f-80c8-64fc95caf1f0-kube-api-access-zlgvh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.243510 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.775561 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5"] Feb 04 11:51:44 crc kubenswrapper[4728]: I0204 11:51:44.879388 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" event={"ID":"56f043ba-0442-438f-80c8-64fc95caf1f0","Type":"ContainerStarted","Data":"483de75cbd30be925b5e712e8b9b38ae487607f1ebf1157ec3a947d6168d7c1a"} Feb 04 11:51:45 crc kubenswrapper[4728]: I0204 11:51:45.895936 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" event={"ID":"56f043ba-0442-438f-80c8-64fc95caf1f0","Type":"ContainerStarted","Data":"7cf306cf8508787a7c8b1f713e88566626f312a06a5f5f8d9e132ce85e45201c"} Feb 04 11:51:45 crc kubenswrapper[4728]: I0204 11:51:45.932880 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" podStartSLOduration=2.397585325 podStartE2EDuration="2.932852311s" podCreationTimestamp="2026-02-04 11:51:43 +0000 UTC" firstStartedPulling="2026-02-04 11:51:44.789385869 +0000 UTC m=+1453.932090254" lastFinishedPulling="2026-02-04 11:51:45.324652855 +0000 UTC m=+1454.467357240" observedRunningTime="2026-02-04 11:51:45.915110065 +0000 UTC m=+1455.057814480" watchObservedRunningTime="2026-02-04 11:51:45.932852311 +0000 UTC m=+1455.075556726" Feb 04 11:52:05 crc kubenswrapper[4728]: I0204 11:52:05.448804 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:52:05 crc kubenswrapper[4728]: I0204 11:52:05.449470 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 11:52:35 crc kubenswrapper[4728]: I0204 11:52:35.447987 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:52:35 crc kubenswrapper[4728]: I0204 11:52:35.448566 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 11:52:37 crc kubenswrapper[4728]: I0204 11:52:37.540432 4728 scope.go:117] "RemoveContainer" containerID="8fbf25577ac0a922eca161af9e01e59c9d946d5aa1da5780365d14d8be158f46" Feb 04 11:52:37 crc kubenswrapper[4728]: I0204 11:52:37.563271 4728 scope.go:117] "RemoveContainer" containerID="7977719a93e758c78f02f7d98ada25b4bccc6d39041bcd036a40072a0b3b5b90" Feb 04 11:52:37 crc kubenswrapper[4728]: I0204 11:52:37.622428 4728 scope.go:117] 
"RemoveContainer" containerID="d927e81bdac3eae88723b2f499ebf74789ab96548dc4dc4ae85b430926ee6f7c" Feb 04 11:52:37 crc kubenswrapper[4728]: I0204 11:52:37.658610 4728 scope.go:117] "RemoveContainer" containerID="289cbda8ea6a4c7297f403a901164493dcb757fb3f75a037ed41f2b23ecd4732" Feb 04 11:52:37 crc kubenswrapper[4728]: I0204 11:52:37.680924 4728 scope.go:117] "RemoveContainer" containerID="9e1532310c0f9fb9aedd929ebd28854cff0b381177df8c64401fa12424b76796" Feb 04 11:53:05 crc kubenswrapper[4728]: I0204 11:53:05.448872 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 11:53:05 crc kubenswrapper[4728]: I0204 11:53:05.449443 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 11:53:05 crc kubenswrapper[4728]: I0204 11:53:05.449485 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 11:53:05 crc kubenswrapper[4728]: I0204 11:53:05.450242 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 11:53:05 crc kubenswrapper[4728]: I0204 11:53:05.450308 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" gracePeriod=600 Feb 04 11:53:05 crc kubenswrapper[4728]: E0204 11:53:05.570537 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:53:05 crc kubenswrapper[4728]: I0204 11:53:05.631178 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" exitCode=0 Feb 04 11:53:05 crc kubenswrapper[4728]: I0204 11:53:05.631229 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994"} Feb 04 11:53:05 crc kubenswrapper[4728]: I0204 11:53:05.631272 4728 scope.go:117] "RemoveContainer" containerID="df14b9397f5cab1fc5b2e7a5ea922d0337cd8f0ecd7c5a6f65afe229e61d080f" Feb 04 11:53:05 crc kubenswrapper[4728]: I0204 
11:53:05.632448 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:53:05 crc kubenswrapper[4728]: E0204 11:53:05.632733 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:53:20 crc kubenswrapper[4728]: I0204 11:53:20.554664 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:53:20 crc kubenswrapper[4728]: E0204 11:53:20.557588 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:53:34 crc kubenswrapper[4728]: I0204 11:53:34.554409 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:53:34 crc kubenswrapper[4728]: E0204 11:53:34.570729 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:53:37 crc kubenswrapper[4728]: I0204 11:53:37.747382 4728 scope.go:117] "RemoveContainer" containerID="9d521f00d9f66c3697767f5fda94c81721ff60ef59a102b3072eefab17f956c9" Feb 04 11:53:49 crc kubenswrapper[4728]: I0204 11:53:49.553580 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:53:49 crc kubenswrapper[4728]: E0204 11:53:49.554361 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:54:04 crc kubenswrapper[4728]: I0204 11:54:04.555168 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:54:04 crc kubenswrapper[4728]: E0204 11:54:04.555780 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:54:17 crc kubenswrapper[4728]: I0204 11:54:17.554239 
4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:54:17 crc kubenswrapper[4728]: E0204 11:54:17.554951 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:54:30 crc kubenswrapper[4728]: I0204 11:54:30.554557 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:54:30 crc kubenswrapper[4728]: E0204 11:54:30.555224 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:54:45 crc kubenswrapper[4728]: I0204 11:54:45.553971 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:54:45 crc kubenswrapper[4728]: E0204 11:54:45.554983 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:55:00 crc kubenswrapper[4728]: I0204 11:55:00.554347 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:55:00 crc kubenswrapper[4728]: E0204 11:55:00.557566 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:55:13 crc kubenswrapper[4728]: I0204 11:55:13.042587 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cn98m"] Feb 04 11:55:13 crc kubenswrapper[4728]: I0204 11:55:13.056642 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cn98m"] Feb 04 11:55:13 crc kubenswrapper[4728]: I0204 11:55:13.554428 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:55:13 crc kubenswrapper[4728]: E0204 11:55:13.554680 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:55:13 crc kubenswrapper[4728]: I0204 11:55:13.565496 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a684128-2a85-49af-857f-3d37de311853" path="/var/lib/kubelet/pods/5a684128-2a85-49af-857f-3d37de311853/volumes" Feb 04 11:55:15 crc kubenswrapper[4728]: I0204 11:55:15.028304 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0548-account-create-update-66scr"] Feb 04 11:55:15 crc kubenswrapper[4728]: I0204 11:55:15.036127 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0548-account-create-update-66scr"] Feb 04 11:55:15 crc kubenswrapper[4728]: I0204 11:55:15.565269 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3" path="/var/lib/kubelet/pods/956f1cf3-2478-4ec6-9b1c-04e15b5f3ee3/volumes" Feb 04 11:55:16 crc kubenswrapper[4728]: I0204 11:55:16.026742 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-2gkww"] Feb 04 11:55:16 crc kubenswrapper[4728]: I0204 11:55:16.042345 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-2gkww"] Feb 04 11:55:17 crc kubenswrapper[4728]: I0204 11:55:17.036503 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9211-account-create-update-bgnxc"] Feb 04 11:55:17 crc kubenswrapper[4728]: I0204 11:55:17.048120 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-ktzhs"] Feb 04 11:55:17 crc kubenswrapper[4728]: I0204 11:55:17.061801 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-ec4e-account-create-update-l68lt"] Feb 04 11:55:17 crc kubenswrapper[4728]: I0204 11:55:17.070191 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-ktzhs"] Feb 04 11:55:17 crc kubenswrapper[4728]: I0204 11:55:17.079022 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-ec4e-account-create-update-l68lt"] Feb 04 11:55:17 crc kubenswrapper[4728]: I0204 11:55:17.087574 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9211-account-create-update-bgnxc"] Feb 04 11:55:17 crc kubenswrapper[4728]: I0204 11:55:17.563581 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e18986c-826b-4478-a01a-29fcce1f946f" path="/var/lib/kubelet/pods/6e18986c-826b-4478-a01a-29fcce1f946f/volumes" Feb 04 11:55:17 crc kubenswrapper[4728]: I0204 11:55:17.564382 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e29c6f-daef-4720-805c-a5889be741e0" path="/var/lib/kubelet/pods/83e29c6f-daef-4720-805c-a5889be741e0/volumes" Feb 04 11:55:17 crc kubenswrapper[4728]: I0204 11:55:17.564967 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc936e27-7f59-4b4f-af88-3489bac544c0" path="/var/lib/kubelet/pods/bc936e27-7f59-4b4f-af88-3489bac544c0/volumes" Feb 04 11:55:17 crc kubenswrapper[4728]: I0204 11:55:17.565577 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beac9f4f-a615-4244-9ba8-ded8ce531f3b" path="/var/lib/kubelet/pods/beac9f4f-a615-4244-9ba8-ded8ce531f3b/volumes" Feb 04 11:55:24 crc kubenswrapper[4728]: I0204 11:55:24.881174 4728 generic.go:334] "Generic (PLEG): container finished" podID="56f043ba-0442-438f-80c8-64fc95caf1f0" 
containerID="7cf306cf8508787a7c8b1f713e88566626f312a06a5f5f8d9e132ce85e45201c" exitCode=0 Feb 04 11:55:24 crc kubenswrapper[4728]: I0204 11:55:24.881268 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" event={"ID":"56f043ba-0442-438f-80c8-64fc95caf1f0","Type":"ContainerDied","Data":"7cf306cf8508787a7c8b1f713e88566626f312a06a5f5f8d9e132ce85e45201c"} Feb 04 11:55:25 crc kubenswrapper[4728]: I0204 11:55:25.555047 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:55:25 crc kubenswrapper[4728]: E0204 11:55:25.555613 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.302024 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.411847 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-inventory\") pod \"56f043ba-0442-438f-80c8-64fc95caf1f0\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.411973 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-ssh-key-openstack-edpm-ipam\") pod \"56f043ba-0442-438f-80c8-64fc95caf1f0\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.412055 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlgvh\" (UniqueName: \"kubernetes.io/projected/56f043ba-0442-438f-80c8-64fc95caf1f0-kube-api-access-zlgvh\") pod \"56f043ba-0442-438f-80c8-64fc95caf1f0\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.412847 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-bootstrap-combined-ca-bundle\") pod \"56f043ba-0442-438f-80c8-64fc95caf1f0\" (UID: \"56f043ba-0442-438f-80c8-64fc95caf1f0\") " Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.418476 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "56f043ba-0442-438f-80c8-64fc95caf1f0" (UID: "56f043ba-0442-438f-80c8-64fc95caf1f0"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.419975 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f043ba-0442-438f-80c8-64fc95caf1f0-kube-api-access-zlgvh" (OuterVolumeSpecName: "kube-api-access-zlgvh") pod "56f043ba-0442-438f-80c8-64fc95caf1f0" (UID: "56f043ba-0442-438f-80c8-64fc95caf1f0"). InnerVolumeSpecName "kube-api-access-zlgvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.449262 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "56f043ba-0442-438f-80c8-64fc95caf1f0" (UID: "56f043ba-0442-438f-80c8-64fc95caf1f0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.450922 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-inventory" (OuterVolumeSpecName: "inventory") pod "56f043ba-0442-438f-80c8-64fc95caf1f0" (UID: "56f043ba-0442-438f-80c8-64fc95caf1f0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.514819 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.514859 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.514876 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlgvh\" (UniqueName: \"kubernetes.io/projected/56f043ba-0442-438f-80c8-64fc95caf1f0-kube-api-access-zlgvh\") on node \"crc\" DevicePath \"\"" Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.514888 4728 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f043ba-0442-438f-80c8-64fc95caf1f0-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.903889 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" event={"ID":"56f043ba-0442-438f-80c8-64fc95caf1f0","Type":"ContainerDied","Data":"483de75cbd30be925b5e712e8b9b38ae487607f1ebf1157ec3a947d6168d7c1a"} Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.903928 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="483de75cbd30be925b5e712e8b9b38ae487607f1ebf1157ec3a947d6168d7c1a" Feb 04 11:55:26 crc kubenswrapper[4728]: I0204 11:55:26.903985 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.002549 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577"] Feb 04 11:55:27 crc kubenswrapper[4728]: E0204 11:55:27.002937 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f043ba-0442-438f-80c8-64fc95caf1f0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.002954 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f043ba-0442-438f-80c8-64fc95caf1f0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.003151 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f043ba-0442-438f-80c8-64fc95caf1f0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.003728 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.006847 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.007323 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.007604 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.008415 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.031698 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577"] Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.126242 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vv577\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.126416 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vv577\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.126479 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldlh7\" (UniqueName: \"kubernetes.io/projected/068d923c-e3c2-4221-8a84-7af590000487-kube-api-access-ldlh7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vv577\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:55:27 crc kubenswrapper[4728]: 
I0204 11:55:27.228582 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldlh7\" (UniqueName: \"kubernetes.io/projected/068d923c-e3c2-4221-8a84-7af590000487-kube-api-access-ldlh7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vv577\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.228664 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vv577\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.228783 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vv577\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.233533 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vv577\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.245600 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vv577\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.248466 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldlh7\" (UniqueName: \"kubernetes.io/projected/068d923c-e3c2-4221-8a84-7af590000487-kube-api-access-ldlh7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vv577\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.323118 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.883260 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577"] Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.888321 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 11:55:27 crc kubenswrapper[4728]: I0204 11:55:27.912555 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" event={"ID":"068d923c-e3c2-4221-8a84-7af590000487","Type":"ContainerStarted","Data":"8d68df788f92fbdaa54f68e29adf7b2643304ddda6d80fe590a15902f4d6aebf"} Feb 04 11:55:28 crc kubenswrapper[4728]: I0204 11:55:28.922411 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" event={"ID":"068d923c-e3c2-4221-8a84-7af590000487","Type":"ContainerStarted","Data":"f91eb69e75814b83531198707e318d347828269bbebbe1cb7d9268e8f49b7103"} Feb 04 11:55:28 crc kubenswrapper[4728]: I0204 11:55:28.943090 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" podStartSLOduration=2.402340724 podStartE2EDuration="2.943072589s" podCreationTimestamp="2026-02-04 11:55:26 +0000 UTC" firstStartedPulling="2026-02-04 11:55:27.888090629 +0000 UTC m=+1677.030795014" lastFinishedPulling="2026-02-04 11:55:28.428822494 +0000 UTC m=+1677.571526879" observedRunningTime="2026-02-04 11:55:28.936742684 +0000 UTC m=+1678.079447059" watchObservedRunningTime="2026-02-04 11:55:28.943072589 +0000 UTC m=+1678.085776974" Feb 04 11:55:37 crc kubenswrapper[4728]: I0204 11:55:37.042181 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q77rh"] Feb 04 11:55:37 crc kubenswrapper[4728]: I0204 11:55:37.051907 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-q77rh"] Feb 04 11:55:37 crc kubenswrapper[4728]: I0204 11:55:37.567359 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a480c5f-4928-4a57-bb8a-7c5017d06563" path="/var/lib/kubelet/pods/6a480c5f-4928-4a57-bb8a-7c5017d06563/volumes" Feb 04 11:55:37 crc kubenswrapper[4728]: I0204 11:55:37.814801 4728 scope.go:117] "RemoveContainer" containerID="7d4a15fc2cc0141be955db843e9bbbe7c1548f34593515af47bf79c189e1744a" Feb 04 11:55:37 crc kubenswrapper[4728]: I0204 11:55:37.838508 4728 scope.go:117] "RemoveContainer" containerID="36ddc99f8878083387d1e77b4af7e9c4113d1170b93f42016aea83bfb25d53ba" Feb 04 11:55:37 crc kubenswrapper[4728]: I0204 11:55:37.875163 4728 scope.go:117] "RemoveContainer" containerID="b5a0fc5178591163f1573028d90f5ea14c105fec0915974b0ad82d41f0f1c341" Feb 04 11:55:37 crc kubenswrapper[4728]: I0204 11:55:37.920239 4728 scope.go:117] "RemoveContainer" containerID="73ac02af509917ba82c429e8458ce626e04cd6d3c71b3655f16d21bca0c650be" Feb 04 11:55:37 crc kubenswrapper[4728]: I0204 11:55:37.981209 4728 scope.go:117] "RemoveContainer" containerID="d2a70547838fe1f358d66306c3fb124fbb8dab681755c1c77f57f252559ab0c8" Feb 04 11:55:38 crc kubenswrapper[4728]: I0204 11:55:38.015208 4728 scope.go:117] "RemoveContainer" containerID="a908af24ecb02b4fb391deca169fde709052ef74f63d4438c74a6d9699af300d" Feb 04 11:55:38 crc kubenswrapper[4728]: I0204 11:55:38.060912 4728 scope.go:117] "RemoveContainer" 
containerID="1fb7b2873fa6c0af713444af4006694de4367a209b847fbd2029944d045c62f9" Feb 04 11:55:38 crc kubenswrapper[4728]: I0204 11:55:38.084954 4728 scope.go:117] "RemoveContainer" containerID="97f108bc7ed74972f167ede74797502d8f80463477c9de97881830a8243c977e" Feb 04 11:55:38 crc kubenswrapper[4728]: I0204 11:55:38.125287 4728 scope.go:117] "RemoveContainer" containerID="41a7f8a2fe5ac89a48c85530baf21f0280aef28ed16598cbfeafa294b252759f" Feb 04 11:55:38 crc kubenswrapper[4728]: I0204 11:55:38.553547 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:55:38 crc kubenswrapper[4728]: E0204 11:55:38.554770 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:55:49 crc kubenswrapper[4728]: I0204 11:55:49.553576 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:55:49 crc kubenswrapper[4728]: E0204 11:55:49.554383 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.069636 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-lwtzn"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.089258 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-64d0-account-create-update-xrn2s"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.103784 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-kvjt5"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.113662 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-f565-account-create-update-hn9mk"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.123626 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-m8z6k"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.131935 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mg5hg"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.139883 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-gzzfl"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.147390 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-11c2-account-create-update-kzkht"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.155464 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-lwtzn"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.166081 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0249-account-create-update-jjdf2"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.173969 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-db-create-m8z6k"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.181554 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-64d0-account-create-update-xrn2s"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.188261 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-kvjt5"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.196458 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mg5hg"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.210975 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-f565-account-create-update-hn9mk"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.225260 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-gzzfl"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.235034 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-11c2-account-create-update-kzkht"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.242008 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0249-account-create-update-jjdf2"] Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.567792 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c9cdbe1-394e-4100-ad1c-53851f9955fb" path="/var/lib/kubelet/pods/1c9cdbe1-394e-4100-ad1c-53851f9955fb/volumes" Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.568521 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ebaecfd-261e-41a3-be7b-9e21a9b7a10a" path="/var/lib/kubelet/pods/1ebaecfd-261e-41a3-be7b-9e21a9b7a10a/volumes" Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.569146 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3528d762-0e78-4914-ae55-f11bb812f322" path="/var/lib/kubelet/pods/3528d762-0e78-4914-ae55-f11bb812f322/volumes" Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.569674 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fac279-e557-4505-8a93-d7610f2326f0" path="/var/lib/kubelet/pods/37fac279-e557-4505-8a93-d7610f2326f0/volumes" Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.570999 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54311305-ef02-48b5-a913-4b5c8fa9730b" path="/var/lib/kubelet/pods/54311305-ef02-48b5-a913-4b5c8fa9730b/volumes" Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.571658 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5552fcdf-e47f-47e8-acde-ed2e74f54188" path="/var/lib/kubelet/pods/5552fcdf-e47f-47e8-acde-ed2e74f54188/volumes" Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.572465 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81194f90-78b9-463c-a83e-adce1621a8ec" path="/var/lib/kubelet/pods/81194f90-78b9-463c-a83e-adce1621a8ec/volumes" Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.573473 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a84078cf-9bd8-4920-9537-c4d1f863e4b2" path="/var/lib/kubelet/pods/a84078cf-9bd8-4920-9537-c4d1f863e4b2/volumes" Feb 04 11:55:53 crc kubenswrapper[4728]: I0204 11:55:53.574020 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfe3ee4-3d03-4e23-b977-b90d512610ab" path="/var/lib/kubelet/pods/bdfe3ee4-3d03-4e23-b977-b90d512610ab/volumes" Feb 04 11:55:58 crc kubenswrapper[4728]: I0204 11:55:58.039389 4728 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qdzqf"] Feb 04 11:55:58 crc kubenswrapper[4728]: I0204 11:55:58.052614 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qdzqf"] Feb 04 11:55:59 crc kubenswrapper[4728]: I0204 11:55:59.564249 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8279c44-c9f5-40f7-a933-bf7b91f30750" path="/var/lib/kubelet/pods/d8279c44-c9f5-40f7-a933-bf7b91f30750/volumes" Feb 04 11:56:02 crc kubenswrapper[4728]: I0204 11:56:02.554072 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:56:02 crc kubenswrapper[4728]: E0204 11:56:02.554792 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:56:15 crc kubenswrapper[4728]: I0204 11:56:15.553965 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:56:15 crc kubenswrapper[4728]: E0204 11:56:15.554549 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:56:28 crc kubenswrapper[4728]: I0204 11:56:28.039922 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kj7rj"] Feb 04 11:56:28 crc kubenswrapper[4728]: I0204 11:56:28.048193 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kj7rj"] Feb 04 11:56:28 crc kubenswrapper[4728]: I0204 11:56:28.553519 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:56:28 crc kubenswrapper[4728]: E0204 11:56:28.553842 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:56:29 crc kubenswrapper[4728]: I0204 11:56:29.568627 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3187936-5c4f-4c33-ae73-63309fa067aa" path="/var/lib/kubelet/pods/e3187936-5c4f-4c33-ae73-63309fa067aa/volumes" Feb 04 11:56:32 crc kubenswrapper[4728]: I0204 11:56:32.033881 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-xrxlt"] Feb 04 11:56:32 crc kubenswrapper[4728]: I0204 11:56:32.043213 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-xrxlt"] Feb 04 11:56:33 crc kubenswrapper[4728]: I0204 11:56:33.572995 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e456f3f-3163-48da-9e88-aaddade811b6" 
path="/var/lib/kubelet/pods/0e456f3f-3163-48da-9e88-aaddade811b6/volumes" Feb 04 11:56:37 crc kubenswrapper[4728]: I0204 11:56:37.031543 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-z5s6t"] Feb 04 11:56:37 crc kubenswrapper[4728]: I0204 11:56:37.040367 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-z5s6t"] Feb 04 11:56:37 crc kubenswrapper[4728]: I0204 11:56:37.564814 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8339046-9234-4489-b308-c592a7afa3b0" path="/var/lib/kubelet/pods/e8339046-9234-4489-b308-c592a7afa3b0/volumes" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.271100 4728 scope.go:117] "RemoveContainer" containerID="1b0ae7805d043aaaa9d704d909ca5ad0b5f83d471d67241cf83453be403bed11" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.305473 4728 scope.go:117] "RemoveContainer" containerID="57a2b04e6bb372fa84d619cb384d7f6c9e8ed4259615c3ecc6a06af4bdcda13c" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.369507 4728 scope.go:117] "RemoveContainer" containerID="a3f4757b77263aea4d424adc927e527d754e0638a16e59f7998abe8d7fd8d28d" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.407401 4728 scope.go:117] "RemoveContainer" containerID="b6aed950b1defebc26df71094e7110da169177e75b9abd57da391ef1a323cfea" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.476500 4728 scope.go:117] "RemoveContainer" containerID="fd03290c8c69db507ffea81d50b57b9a2266eae929a90580d2046112af1a45b7" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.496871 4728 scope.go:117] "RemoveContainer" containerID="cac95b3f8edc828472f9a5ecb75eb070d6c5dc25c7500917acb5f54c9dff69c1" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.553843 4728 scope.go:117] "RemoveContainer" containerID="f9eab5e7a6ce22cf85a477438cd5a84bb945d38339ec548ca84640f72cc21061" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.574572 4728 scope.go:117] "RemoveContainer" containerID="cf1e0a86d567568bbdcf29fa76aa28480ffa62887364760579f8193851c0116d" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.594708 4728 scope.go:117] "RemoveContainer" containerID="fa26f10b54d72ad9d06108aa3070a2337f7c5d879ba5987b4395ea43542f7cc1" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.623549 4728 scope.go:117] "RemoveContainer" containerID="d739f6ccc734ed98ae7e62cf6daada056c75d39d07dba6c295e2d68ecb7a6d52" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.645893 4728 scope.go:117] "RemoveContainer" containerID="674ea414f42b45dfd5f58fa6f85a9d498b791146600682fc5fbdc2205ab77ff8" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.679815 4728 scope.go:117] "RemoveContainer" containerID="31ea12d26a9507e72ec3a47653329e35eb6dedbddea80008a07362458153c0a4" Feb 04 11:56:38 crc kubenswrapper[4728]: I0204 11:56:38.697886 4728 scope.go:117] "RemoveContainer" containerID="9df5a4154fa2b461b55b57b3c380076491c19628ebc6a5f5b12f41fc0eddf522" Feb 04 11:56:42 crc kubenswrapper[4728]: I0204 11:56:42.554468 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:56:42 crc kubenswrapper[4728]: E0204 11:56:42.555385 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:56:46 crc kubenswrapper[4728]: I0204 11:56:46.030590 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-nqv8q"] Feb 04 11:56:46 crc kubenswrapper[4728]: I0204 11:56:46.039509 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-nqv8q"] Feb 04 11:56:47 crc kubenswrapper[4728]: I0204 11:56:47.577611 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad93666-c664-4ded-8970-993c847ac437" path="/var/lib/kubelet/pods/aad93666-c664-4ded-8970-993c847ac437/volumes" Feb 04 11:56:55 crc kubenswrapper[4728]: I0204 11:56:55.057050 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-jx9lp"] Feb 04 11:56:55 crc kubenswrapper[4728]: I0204 11:56:55.063982 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-jx9lp"] Feb 04 11:56:55 crc kubenswrapper[4728]: I0204 11:56:55.554212 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:56:55 crc kubenswrapper[4728]: E0204 11:56:55.554531 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:56:55 crc kubenswrapper[4728]: I0204 11:56:55.563589 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2186aabd-28ff-488a-a224-01c14710adac" path="/var/lib/kubelet/pods/2186aabd-28ff-488a-a224-01c14710adac/volumes" Feb 04 11:56:56 crc kubenswrapper[4728]: I0204 11:56:56.026101 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kgzbm"] Feb 04 11:56:56 crc kubenswrapper[4728]: I0204 11:56:56.033565 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kgzbm"] Feb 04 11:56:57 crc kubenswrapper[4728]: I0204 11:56:57.565421 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d949b343-bfde-4d50-81b1-a7c66765c076" path="/var/lib/kubelet/pods/d949b343-bfde-4d50-81b1-a7c66765c076/volumes" Feb 04 11:57:06 crc kubenswrapper[4728]: I0204 11:57:06.554150 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:57:06 crc kubenswrapper[4728]: E0204 11:57:06.555466 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:57:20 crc kubenswrapper[4728]: I0204 11:57:20.553916 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:57:20 crc kubenswrapper[4728]: E0204 11:57:20.554786 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:57:32 crc kubenswrapper[4728]: I0204 11:57:32.086153 4728 generic.go:334] "Generic (PLEG): container finished" podID="068d923c-e3c2-4221-8a84-7af590000487" containerID="f91eb69e75814b83531198707e318d347828269bbebbe1cb7d9268e8f49b7103" exitCode=0 Feb 04 11:57:32 crc kubenswrapper[4728]: I0204 11:57:32.086252 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" event={"ID":"068d923c-e3c2-4221-8a84-7af590000487","Type":"ContainerDied","Data":"f91eb69e75814b83531198707e318d347828269bbebbe1cb7d9268e8f49b7103"} Feb 04 11:57:33 crc kubenswrapper[4728]: I0204 11:57:33.548054 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:57:33 crc kubenswrapper[4728]: I0204 11:57:33.694860 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldlh7\" (UniqueName: \"kubernetes.io/projected/068d923c-e3c2-4221-8a84-7af590000487-kube-api-access-ldlh7\") pod \"068d923c-e3c2-4221-8a84-7af590000487\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " Feb 04 11:57:33 crc kubenswrapper[4728]: I0204 11:57:33.695005 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-inventory\") pod \"068d923c-e3c2-4221-8a84-7af590000487\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " Feb 04 11:57:33 crc kubenswrapper[4728]: I0204 11:57:33.695075 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-ssh-key-openstack-edpm-ipam\") pod \"068d923c-e3c2-4221-8a84-7af590000487\" (UID: \"068d923c-e3c2-4221-8a84-7af590000487\") " Feb 04 11:57:33 crc kubenswrapper[4728]: I0204 11:57:33.700182 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068d923c-e3c2-4221-8a84-7af590000487-kube-api-access-ldlh7" (OuterVolumeSpecName: "kube-api-access-ldlh7") pod "068d923c-e3c2-4221-8a84-7af590000487" (UID: "068d923c-e3c2-4221-8a84-7af590000487"). InnerVolumeSpecName "kube-api-access-ldlh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:57:33 crc kubenswrapper[4728]: I0204 11:57:33.720901 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-inventory" (OuterVolumeSpecName: "inventory") pod "068d923c-e3c2-4221-8a84-7af590000487" (UID: "068d923c-e3c2-4221-8a84-7af590000487"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:57:33 crc kubenswrapper[4728]: I0204 11:57:33.729272 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "068d923c-e3c2-4221-8a84-7af590000487" (UID: "068d923c-e3c2-4221-8a84-7af590000487"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:57:33 crc kubenswrapper[4728]: I0204 11:57:33.798206 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldlh7\" (UniqueName: \"kubernetes.io/projected/068d923c-e3c2-4221-8a84-7af590000487-kube-api-access-ldlh7\") on node \"crc\" DevicePath \"\"" Feb 04 11:57:33 crc kubenswrapper[4728]: I0204 11:57:33.798605 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 11:57:33 crc kubenswrapper[4728]: I0204 11:57:33.798685 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/068d923c-e3c2-4221-8a84-7af590000487-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.109838 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" event={"ID":"068d923c-e3c2-4221-8a84-7af590000487","Type":"ContainerDied","Data":"8d68df788f92fbdaa54f68e29adf7b2643304ddda6d80fe590a15902f4d6aebf"} Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.109902 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vv577" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.109900 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d68df788f92fbdaa54f68e29adf7b2643304ddda6d80fe590a15902f4d6aebf" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.220391 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6"] Feb 04 11:57:34 crc kubenswrapper[4728]: E0204 11:57:34.220968 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068d923c-e3c2-4221-8a84-7af590000487" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.220989 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="068d923c-e3c2-4221-8a84-7af590000487" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.221183 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="068d923c-e3c2-4221-8a84-7af590000487" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.221737 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.223675 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.224068 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.224256 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.225194 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.240153 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6"] Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.310068 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.310143 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b49pv\" (UniqueName: \"kubernetes.io/projected/6e9765bf-d2ef-4596-ab24-046221ee1d97-kube-api-access-b49pv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.310183 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.411890 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.412303 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.412451 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b49pv\" (UniqueName: 
\"kubernetes.io/projected/6e9765bf-d2ef-4596-ab24-046221ee1d97-kube-api-access-b49pv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.416945 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.418731 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.431955 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b49pv\" (UniqueName: \"kubernetes.io/projected/6e9765bf-d2ef-4596-ab24-046221ee1d97-kube-api-access-b49pv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.542694 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:57:34 crc kubenswrapper[4728]: I0204 11:57:34.555631 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:57:34 crc kubenswrapper[4728]: E0204 11:57:34.557550 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:57:35 crc kubenswrapper[4728]: I0204 11:57:35.073715 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6"] Feb 04 11:57:35 crc kubenswrapper[4728]: W0204 11:57:35.079871 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e9765bf_d2ef_4596_ab24_046221ee1d97.slice/crio-f570996290c4062f8013b12d18a0ea8d137bee4f89e0ef8f64922c6c38f21daa WatchSource:0}: Error finding container f570996290c4062f8013b12d18a0ea8d137bee4f89e0ef8f64922c6c38f21daa: Status 404 returned error can't find the container with id f570996290c4062f8013b12d18a0ea8d137bee4f89e0ef8f64922c6c38f21daa Feb 04 11:57:35 crc kubenswrapper[4728]: I0204 11:57:35.119991 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" 
event={"ID":"6e9765bf-d2ef-4596-ab24-046221ee1d97","Type":"ContainerStarted","Data":"f570996290c4062f8013b12d18a0ea8d137bee4f89e0ef8f64922c6c38f21daa"} Feb 04 11:57:36 crc kubenswrapper[4728]: I0204 11:57:36.129803 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" event={"ID":"6e9765bf-d2ef-4596-ab24-046221ee1d97","Type":"ContainerStarted","Data":"aa30a7c62fcb8beef226e1f6646bdb28d73bd1d98976aa283d4cfa9045dd46bb"} Feb 04 11:57:36 crc kubenswrapper[4728]: I0204 11:57:36.173639 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" podStartSLOduration=1.428299084 podStartE2EDuration="2.173617466s" podCreationTimestamp="2026-02-04 11:57:34 +0000 UTC" firstStartedPulling="2026-02-04 11:57:35.081871964 +0000 UTC m=+1804.224576349" lastFinishedPulling="2026-02-04 11:57:35.827190336 +0000 UTC m=+1804.969894731" observedRunningTime="2026-02-04 11:57:36.148157898 +0000 UTC m=+1805.290862293" watchObservedRunningTime="2026-02-04 11:57:36.173617466 +0000 UTC m=+1805.316321861" Feb 04 11:57:38 crc kubenswrapper[4728]: I0204 11:57:38.054729 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-mz5w2"] Feb 04 11:57:38 crc kubenswrapper[4728]: I0204 11:57:38.067679 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-mz5w2"] Feb 04 11:57:38 crc kubenswrapper[4728]: I0204 11:57:38.896039 4728 scope.go:117] "RemoveContainer" containerID="974db8f25fb6bef41dc01e78218ecc1a75f18ef547a5ac411dea800b4a63e201" Feb 04 11:57:38 crc kubenswrapper[4728]: I0204 11:57:38.947996 4728 scope.go:117] "RemoveContainer" containerID="7106a66ab3aad23ff83211ffa2347875488e4e58c79e3fe0cb3d4223bea9d26e" Feb 04 11:57:38 crc kubenswrapper[4728]: I0204 11:57:38.988623 4728 scope.go:117] "RemoveContainer" containerID="d9374a6fcb36da146d33383f340663934c808b4ad02b8eb86ccf4b5daef8e937" Feb 04 11:57:39 crc kubenswrapper[4728]: I0204 11:57:39.032317 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7e37-account-create-update-svvcv"] Feb 04 11:57:39 crc kubenswrapper[4728]: I0204 11:57:39.049144 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-v7lw4"] Feb 04 11:57:39 crc kubenswrapper[4728]: I0204 11:57:39.057552 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-v7lw4"] Feb 04 11:57:39 crc kubenswrapper[4728]: I0204 11:57:39.064930 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7e37-account-create-update-svvcv"] Feb 04 11:57:39 crc kubenswrapper[4728]: I0204 11:57:39.568365 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ffe9e93-b683-4326-b0dd-ec6eb798ab50" path="/var/lib/kubelet/pods/8ffe9e93-b683-4326-b0dd-ec6eb798ab50/volumes" Feb 04 11:57:39 crc kubenswrapper[4728]: I0204 11:57:39.569796 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63ed46b-54ee-4fe9-adca-5986b1befc95" path="/var/lib/kubelet/pods/c63ed46b-54ee-4fe9-adca-5986b1befc95/volumes" Feb 04 11:57:39 crc kubenswrapper[4728]: I0204 11:57:39.570937 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6bf70d2-1257-434e-9597-b4c98e4bb63b" path="/var/lib/kubelet/pods/e6bf70d2-1257-434e-9597-b4c98e4bb63b/volumes" Feb 04 11:57:40 crc kubenswrapper[4728]: I0204 11:57:40.061331 4728 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-api-ab4b-account-create-update-mk5gq"] Feb 04 11:57:40 crc kubenswrapper[4728]: I0204 11:57:40.072320 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-81ae-account-create-update-c6p7w"] Feb 04 11:57:40 crc kubenswrapper[4728]: I0204 11:57:40.084662 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wggnf"] Feb 04 11:57:40 crc kubenswrapper[4728]: I0204 11:57:40.097218 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ab4b-account-create-update-mk5gq"] Feb 04 11:57:40 crc kubenswrapper[4728]: I0204 11:57:40.107813 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-81ae-account-create-update-c6p7w"] Feb 04 11:57:40 crc kubenswrapper[4728]: I0204 11:57:40.123660 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wggnf"] Feb 04 11:57:41 crc kubenswrapper[4728]: I0204 11:57:41.566274 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0" path="/var/lib/kubelet/pods/2370ecc3-4b7f-47ae-b0ef-288cdbd6f4b0/volumes" Feb 04 11:57:41 crc kubenswrapper[4728]: I0204 11:57:41.566876 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c91a14-9080-4840-bc0e-9e6b103d9d01" path="/var/lib/kubelet/pods/92c91a14-9080-4840-bc0e-9e6b103d9d01/volumes" Feb 04 11:57:41 crc kubenswrapper[4728]: I0204 11:57:41.567375 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e23d5efb-8f3f-40cf-992b-00aa2416f23b" path="/var/lib/kubelet/pods/e23d5efb-8f3f-40cf-992b-00aa2416f23b/volumes" Feb 04 11:57:45 crc kubenswrapper[4728]: I0204 11:57:45.554097 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:57:45 crc kubenswrapper[4728]: E0204 11:57:45.554369 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:57:57 crc kubenswrapper[4728]: I0204 11:57:57.554087 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:57:57 crc kubenswrapper[4728]: E0204 11:57:57.554863 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 11:58:07 crc kubenswrapper[4728]: I0204 11:58:07.047854 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j4npp"] Feb 04 11:58:07 crc kubenswrapper[4728]: I0204 11:58:07.059624 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j4npp"] Feb 04 11:58:07 crc kubenswrapper[4728]: I0204 11:58:07.565417 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71" 
path="/var/lib/kubelet/pods/b8b00c45-cc7b-4de0-876c-9c1ea1bb1f71/volumes" Feb 04 11:58:12 crc kubenswrapper[4728]: I0204 11:58:12.554788 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994" Feb 04 11:58:13 crc kubenswrapper[4728]: I0204 11:58:13.475541 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"529af0f7f966a4ea0b6e4a1f05c7ef144a460c0249245b7c950d3e46bc1f0c22"} Feb 04 11:58:27 crc kubenswrapper[4728]: I0204 11:58:27.051148 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hssm4"] Feb 04 11:58:27 crc kubenswrapper[4728]: I0204 11:58:27.058803 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-hssm4"] Feb 04 11:58:27 crc kubenswrapper[4728]: I0204 11:58:27.571339 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fed81957-c76f-4a31-837d-947294fe38a4" path="/var/lib/kubelet/pods/fed81957-c76f-4a31-837d-947294fe38a4/volumes" Feb 04 11:58:31 crc kubenswrapper[4728]: I0204 11:58:31.029018 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z8hwn"] Feb 04 11:58:31 crc kubenswrapper[4728]: I0204 11:58:31.038199 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z8hwn"] Feb 04 11:58:31 crc kubenswrapper[4728]: I0204 11:58:31.567273 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9187093-4720-491c-b6a0-8a7bdafab687" path="/var/lib/kubelet/pods/f9187093-4720-491c-b6a0-8a7bdafab687/volumes" Feb 04 11:58:39 crc kubenswrapper[4728]: I0204 11:58:39.095971 4728 scope.go:117] "RemoveContainer" containerID="c3f76dd9e78445b250116ba41045e59d04fcdbd9f472e417dbde64ab8f860aca" Feb 04 11:58:39 crc kubenswrapper[4728]: I0204 11:58:39.128979 4728 scope.go:117] "RemoveContainer" containerID="7de2cdd8aa4f383e8d1b12ce3e65304c53384060beca7b2935a4581e53b61461" Feb 04 11:58:39 crc kubenswrapper[4728]: I0204 11:58:39.228516 4728 scope.go:117] "RemoveContainer" containerID="10ad17bc3016f1455b1c2b70dabdb568661c4df8e73566036664c1fca6c3e097" Feb 04 11:58:39 crc kubenswrapper[4728]: I0204 11:58:39.248497 4728 scope.go:117] "RemoveContainer" containerID="b2e424e97a1b32bd70eddc0a5c12764e595ce2f0c4e993c5e0c0a465963210a0" Feb 04 11:58:39 crc kubenswrapper[4728]: I0204 11:58:39.293339 4728 scope.go:117] "RemoveContainer" containerID="c65cd0dfe0e3f33d8c1c82ddaba005e76585e312197cc1e78c077290dde8e154" Feb 04 11:58:39 crc kubenswrapper[4728]: I0204 11:58:39.359379 4728 scope.go:117] "RemoveContainer" containerID="faadb80ff7fbc70da95feb229b34b7951cbaa1d0bad0e3c8e0fb8784523c2f0c" Feb 04 11:58:39 crc kubenswrapper[4728]: I0204 11:58:39.401736 4728 scope.go:117] "RemoveContainer" containerID="d9c3d4a255e4b4b6367a727dea0b48cf1404b72f97ad4fddd862953f8ebe992f" Feb 04 11:58:39 crc kubenswrapper[4728]: I0204 11:58:39.425421 4728 scope.go:117] "RemoveContainer" containerID="7087b35f7c7b90f96088a2ba96a9769cf267eddd4cb3a1c1b2df04c3649b12b1" Feb 04 11:58:39 crc kubenswrapper[4728]: I0204 11:58:39.470738 4728 scope.go:117] "RemoveContainer" containerID="f10851a5a2dcbd0e973ec2bf9a3dbf09487aaa302e171efebde68e286d1bae6e" Feb 04 11:58:57 crc kubenswrapper[4728]: I0204 11:58:57.884661 4728 generic.go:334] "Generic (PLEG): container finished" podID="6e9765bf-d2ef-4596-ab24-046221ee1d97" 
containerID="aa30a7c62fcb8beef226e1f6646bdb28d73bd1d98976aa283d4cfa9045dd46bb" exitCode=0 Feb 04 11:58:57 crc kubenswrapper[4728]: I0204 11:58:57.884813 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" event={"ID":"6e9765bf-d2ef-4596-ab24-046221ee1d97","Type":"ContainerDied","Data":"aa30a7c62fcb8beef226e1f6646bdb28d73bd1d98976aa283d4cfa9045dd46bb"} Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.432027 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.465118 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-inventory\") pod \"6e9765bf-d2ef-4596-ab24-046221ee1d97\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.465332 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b49pv\" (UniqueName: \"kubernetes.io/projected/6e9765bf-d2ef-4596-ab24-046221ee1d97-kube-api-access-b49pv\") pod \"6e9765bf-d2ef-4596-ab24-046221ee1d97\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.465414 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-ssh-key-openstack-edpm-ipam\") pod \"6e9765bf-d2ef-4596-ab24-046221ee1d97\" (UID: \"6e9765bf-d2ef-4596-ab24-046221ee1d97\") " Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.483259 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9765bf-d2ef-4596-ab24-046221ee1d97-kube-api-access-b49pv" (OuterVolumeSpecName: "kube-api-access-b49pv") pod "6e9765bf-d2ef-4596-ab24-046221ee1d97" (UID: "6e9765bf-d2ef-4596-ab24-046221ee1d97"). InnerVolumeSpecName "kube-api-access-b49pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.526912 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6e9765bf-d2ef-4596-ab24-046221ee1d97" (UID: "6e9765bf-d2ef-4596-ab24-046221ee1d97"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.526956 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-inventory" (OuterVolumeSpecName: "inventory") pod "6e9765bf-d2ef-4596-ab24-046221ee1d97" (UID: "6e9765bf-d2ef-4596-ab24-046221ee1d97"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.569056 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.569092 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b49pv\" (UniqueName: \"kubernetes.io/projected/6e9765bf-d2ef-4596-ab24-046221ee1d97-kube-api-access-b49pv\") on node \"crc\" DevicePath \"\"" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.569105 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e9765bf-d2ef-4596-ab24-046221ee1d97-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.907568 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" event={"ID":"6e9765bf-d2ef-4596-ab24-046221ee1d97","Type":"ContainerDied","Data":"f570996290c4062f8013b12d18a0ea8d137bee4f89e0ef8f64922c6c38f21daa"} Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.908109 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f570996290c4062f8013b12d18a0ea8d137bee4f89e0ef8f64922c6c38f21daa" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.907633 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.989606 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc"] Feb 04 11:58:59 crc kubenswrapper[4728]: E0204 11:58:59.990359 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9765bf-d2ef-4596-ab24-046221ee1d97" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.990381 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9765bf-d2ef-4596-ab24-046221ee1d97" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.990581 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9765bf-d2ef-4596-ab24-046221ee1d97" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.991163 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.995121 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.995828 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.996865 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 11:58:59 crc kubenswrapper[4728]: I0204 11:58:59.999524 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.006746 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc"] Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.078020 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.078170 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szr5s\" (UniqueName: \"kubernetes.io/projected/a257135e-ed50-4619-adf8-c7a29970062c-kube-api-access-szr5s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.078303 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.180040 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.180116 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.180202 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szr5s\" (UniqueName: 
\"kubernetes.io/projected/a257135e-ed50-4619-adf8-c7a29970062c-kube-api-access-szr5s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.185059 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.185462 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.200663 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szr5s\" (UniqueName: \"kubernetes.io/projected/a257135e-ed50-4619-adf8-c7a29970062c-kube-api-access-szr5s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.309683 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.825733 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc"] Feb 04 11:59:00 crc kubenswrapper[4728]: I0204 11:59:00.916894 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" event={"ID":"a257135e-ed50-4619-adf8-c7a29970062c","Type":"ContainerStarted","Data":"bb9d8a87f4083c48fbe84e4c9e4e674979344d264bde5ed3d1765caa3bd3afb1"} Feb 04 11:59:05 crc kubenswrapper[4728]: I0204 11:59:05.967085 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" event={"ID":"a257135e-ed50-4619-adf8-c7a29970062c","Type":"ContainerStarted","Data":"de7cc4753cfac291e5fb027861e223c58a2f73aaa3b1ace9178456b82fbce765"} Feb 04 11:59:05 crc kubenswrapper[4728]: I0204 11:59:05.994981 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" podStartSLOduration=2.499824326 podStartE2EDuration="6.994960441s" podCreationTimestamp="2026-02-04 11:58:59 +0000 UTC" firstStartedPulling="2026-02-04 11:59:00.836492306 +0000 UTC m=+1889.979196691" lastFinishedPulling="2026-02-04 11:59:05.331628411 +0000 UTC m=+1894.474332806" observedRunningTime="2026-02-04 11:59:05.983802496 +0000 UTC m=+1895.126506881" watchObservedRunningTime="2026-02-04 11:59:05.994960441 +0000 UTC m=+1895.137664846" Feb 04 11:59:10 crc kubenswrapper[4728]: I0204 11:59:10.022400 4728 generic.go:334] "Generic (PLEG): container finished" podID="a257135e-ed50-4619-adf8-c7a29970062c" 
containerID="de7cc4753cfac291e5fb027861e223c58a2f73aaa3b1ace9178456b82fbce765" exitCode=0 Feb 04 11:59:10 crc kubenswrapper[4728]: I0204 11:59:10.022501 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" event={"ID":"a257135e-ed50-4619-adf8-c7a29970062c","Type":"ContainerDied","Data":"de7cc4753cfac291e5fb027861e223c58a2f73aaa3b1ace9178456b82fbce765"} Feb 04 11:59:11 crc kubenswrapper[4728]: I0204 11:59:11.458011 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:11 crc kubenswrapper[4728]: I0204 11:59:11.497805 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-ssh-key-openstack-edpm-ipam\") pod \"a257135e-ed50-4619-adf8-c7a29970062c\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " Feb 04 11:59:11 crc kubenswrapper[4728]: I0204 11:59:11.498000 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szr5s\" (UniqueName: \"kubernetes.io/projected/a257135e-ed50-4619-adf8-c7a29970062c-kube-api-access-szr5s\") pod \"a257135e-ed50-4619-adf8-c7a29970062c\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " Feb 04 11:59:11 crc kubenswrapper[4728]: I0204 11:59:11.498106 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-inventory\") pod \"a257135e-ed50-4619-adf8-c7a29970062c\" (UID: \"a257135e-ed50-4619-adf8-c7a29970062c\") " Feb 04 11:59:11 crc kubenswrapper[4728]: I0204 11:59:11.503685 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a257135e-ed50-4619-adf8-c7a29970062c-kube-api-access-szr5s" (OuterVolumeSpecName: "kube-api-access-szr5s") pod "a257135e-ed50-4619-adf8-c7a29970062c" (UID: "a257135e-ed50-4619-adf8-c7a29970062c"). InnerVolumeSpecName "kube-api-access-szr5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:59:11 crc kubenswrapper[4728]: I0204 11:59:11.535099 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-inventory" (OuterVolumeSpecName: "inventory") pod "a257135e-ed50-4619-adf8-c7a29970062c" (UID: "a257135e-ed50-4619-adf8-c7a29970062c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:59:11 crc kubenswrapper[4728]: I0204 11:59:11.536456 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a257135e-ed50-4619-adf8-c7a29970062c" (UID: "a257135e-ed50-4619-adf8-c7a29970062c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:59:11 crc kubenswrapper[4728]: I0204 11:59:11.600493 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 11:59:11 crc kubenswrapper[4728]: I0204 11:59:11.600547 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a257135e-ed50-4619-adf8-c7a29970062c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 11:59:11 crc kubenswrapper[4728]: I0204 11:59:11.600562 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szr5s\" (UniqueName: \"kubernetes.io/projected/a257135e-ed50-4619-adf8-c7a29970062c-kube-api-access-szr5s\") on node \"crc\" DevicePath \"\"" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.042963 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" event={"ID":"a257135e-ed50-4619-adf8-c7a29970062c","Type":"ContainerDied","Data":"bb9d8a87f4083c48fbe84e4c9e4e674979344d264bde5ed3d1765caa3bd3afb1"} Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.043256 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb9d8a87f4083c48fbe84e4c9e4e674979344d264bde5ed3d1765caa3bd3afb1" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.043192 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.110010 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5"] Feb 04 11:59:12 crc kubenswrapper[4728]: E0204 11:59:12.110400 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a257135e-ed50-4619-adf8-c7a29970062c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.110423 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a257135e-ed50-4619-adf8-c7a29970062c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.110658 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a257135e-ed50-4619-adf8-c7a29970062c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.111316 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.113218 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.113906 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.114122 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.114266 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.120517 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5"] Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.211173 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sppp5\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.211286 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5l8z\" (UniqueName: \"kubernetes.io/projected/4e045846-4329-41b7-8a9c-eb84a2231443-kube-api-access-c5l8z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sppp5\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.211364 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sppp5\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.312978 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sppp5\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.313139 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5l8z\" (UniqueName: \"kubernetes.io/projected/4e045846-4329-41b7-8a9c-eb84a2231443-kube-api-access-c5l8z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sppp5\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.313203 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-sppp5\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.317317 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sppp5\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.321269 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sppp5\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.331571 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5l8z\" (UniqueName: \"kubernetes.io/projected/4e045846-4329-41b7-8a9c-eb84a2231443-kube-api-access-c5l8z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sppp5\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.427146 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:12 crc kubenswrapper[4728]: I0204 11:59:12.968074 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5"] Feb 04 11:59:13 crc kubenswrapper[4728]: I0204 11:59:13.056608 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-6blvv"] Feb 04 11:59:13 crc kubenswrapper[4728]: I0204 11:59:13.061055 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" event={"ID":"4e045846-4329-41b7-8a9c-eb84a2231443","Type":"ContainerStarted","Data":"890cd7ee586d9a43fea1189bcf2e6321ceaa7a26438664e28ecff8ab9965a060"} Feb 04 11:59:13 crc kubenswrapper[4728]: I0204 11:59:13.071815 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6blvv"] Feb 04 11:59:13 crc kubenswrapper[4728]: I0204 11:59:13.565151 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47678b2a-6ab4-4150-b1b8-091d4e500d2e" path="/var/lib/kubelet/pods/47678b2a-6ab4-4150-b1b8-091d4e500d2e/volumes" Feb 04 11:59:14 crc kubenswrapper[4728]: I0204 11:59:14.071124 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" event={"ID":"4e045846-4329-41b7-8a9c-eb84a2231443","Type":"ContainerStarted","Data":"32af55850232bc8771e377a529bf43567e3effd9f5defc87d6189a57564b6c08"} Feb 04 11:59:14 crc kubenswrapper[4728]: I0204 11:59:14.097236 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" podStartSLOduration=1.653948623 podStartE2EDuration="2.09721464s" podCreationTimestamp="2026-02-04 11:59:12 +0000 UTC" firstStartedPulling="2026-02-04 11:59:12.980214865 +0000 UTC m=+1902.122919250" 
lastFinishedPulling="2026-02-04 11:59:13.423480872 +0000 UTC m=+1902.566185267" observedRunningTime="2026-02-04 11:59:14.092303849 +0000 UTC m=+1903.235008234" watchObservedRunningTime="2026-02-04 11:59:14.09721464 +0000 UTC m=+1903.239919025" Feb 04 11:59:39 crc kubenswrapper[4728]: I0204 11:59:39.635167 4728 scope.go:117] "RemoveContainer" containerID="075327945d66de278c1c9961e1390d4fdf1c1e23fe3e99e878c1a99557a3b814" Feb 04 11:59:47 crc kubenswrapper[4728]: I0204 11:59:47.362809 4728 generic.go:334] "Generic (PLEG): container finished" podID="4e045846-4329-41b7-8a9c-eb84a2231443" containerID="32af55850232bc8771e377a529bf43567e3effd9f5defc87d6189a57564b6c08" exitCode=0 Feb 04 11:59:47 crc kubenswrapper[4728]: I0204 11:59:47.362905 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" event={"ID":"4e045846-4329-41b7-8a9c-eb84a2231443","Type":"ContainerDied","Data":"32af55850232bc8771e377a529bf43567e3effd9f5defc87d6189a57564b6c08"} Feb 04 11:59:48 crc kubenswrapper[4728]: I0204 11:59:48.833172 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:48 crc kubenswrapper[4728]: I0204 11:59:48.950443 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-inventory\") pod \"4e045846-4329-41b7-8a9c-eb84a2231443\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " Feb 04 11:59:48 crc kubenswrapper[4728]: I0204 11:59:48.950494 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5l8z\" (UniqueName: \"kubernetes.io/projected/4e045846-4329-41b7-8a9c-eb84a2231443-kube-api-access-c5l8z\") pod \"4e045846-4329-41b7-8a9c-eb84a2231443\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " Feb 04 11:59:48 crc kubenswrapper[4728]: I0204 11:59:48.950662 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-ssh-key-openstack-edpm-ipam\") pod \"4e045846-4329-41b7-8a9c-eb84a2231443\" (UID: \"4e045846-4329-41b7-8a9c-eb84a2231443\") " Feb 04 11:59:48 crc kubenswrapper[4728]: I0204 11:59:48.956071 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e045846-4329-41b7-8a9c-eb84a2231443-kube-api-access-c5l8z" (OuterVolumeSpecName: "kube-api-access-c5l8z") pod "4e045846-4329-41b7-8a9c-eb84a2231443" (UID: "4e045846-4329-41b7-8a9c-eb84a2231443"). InnerVolumeSpecName "kube-api-access-c5l8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 11:59:48 crc kubenswrapper[4728]: I0204 11:59:48.985078 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4e045846-4329-41b7-8a9c-eb84a2231443" (UID: "4e045846-4329-41b7-8a9c-eb84a2231443"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:59:48 crc kubenswrapper[4728]: I0204 11:59:48.987512 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-inventory" (OuterVolumeSpecName: "inventory") pod "4e045846-4329-41b7-8a9c-eb84a2231443" (UID: "4e045846-4329-41b7-8a9c-eb84a2231443"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.055239 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.055278 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e045846-4329-41b7-8a9c-eb84a2231443-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.055291 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5l8z\" (UniqueName: \"kubernetes.io/projected/4e045846-4329-41b7-8a9c-eb84a2231443-kube-api-access-c5l8z\") on node \"crc\" DevicePath \"\"" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.383530 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" event={"ID":"4e045846-4329-41b7-8a9c-eb84a2231443","Type":"ContainerDied","Data":"890cd7ee586d9a43fea1189bcf2e6321ceaa7a26438664e28ecff8ab9965a060"} Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.383820 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="890cd7ee586d9a43fea1189bcf2e6321ceaa7a26438664e28ecff8ab9965a060" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.383826 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sppp5" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.475210 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk"] Feb 04 11:59:49 crc kubenswrapper[4728]: E0204 11:59:49.475998 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e045846-4329-41b7-8a9c-eb84a2231443" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.476024 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e045846-4329-41b7-8a9c-eb84a2231443" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.476245 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e045846-4329-41b7-8a9c-eb84a2231443" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.477066 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.479147 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.479778 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.479872 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.480958 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.491996 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk"] Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.565419 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h6czk\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.565471 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h6czk\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.565593 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzgxw\" (UniqueName: \"kubernetes.io/projected/2980ce97-200b-40eb-b084-817ac3a421ca-kube-api-access-vzgxw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h6czk\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.667838 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h6czk\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.667906 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h6czk\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.668067 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzgxw\" (UniqueName: 
\"kubernetes.io/projected/2980ce97-200b-40eb-b084-817ac3a421ca-kube-api-access-vzgxw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h6czk\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.673697 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h6czk\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.683720 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h6czk\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.686946 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzgxw\" (UniqueName: \"kubernetes.io/projected/2980ce97-200b-40eb-b084-817ac3a421ca-kube-api-access-vzgxw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h6czk\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 11:59:49 crc kubenswrapper[4728]: I0204 11:59:49.797204 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 11:59:50 crc kubenswrapper[4728]: I0204 11:59:50.337053 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk"] Feb 04 11:59:50 crc kubenswrapper[4728]: W0204 11:59:50.344039 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2980ce97_200b_40eb_b084_817ac3a421ca.slice/crio-bd54db7117174c2d6469ca80202cbaeaad90e41b914c1ec97c256e589891d408 WatchSource:0}: Error finding container bd54db7117174c2d6469ca80202cbaeaad90e41b914c1ec97c256e589891d408: Status 404 returned error can't find the container with id bd54db7117174c2d6469ca80202cbaeaad90e41b914c1ec97c256e589891d408 Feb 04 11:59:50 crc kubenswrapper[4728]: I0204 11:59:50.399947 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" event={"ID":"2980ce97-200b-40eb-b084-817ac3a421ca","Type":"ContainerStarted","Data":"bd54db7117174c2d6469ca80202cbaeaad90e41b914c1ec97c256e589891d408"} Feb 04 11:59:51 crc kubenswrapper[4728]: I0204 11:59:51.408523 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" event={"ID":"2980ce97-200b-40eb-b084-817ac3a421ca","Type":"ContainerStarted","Data":"d2bc1c71d5288e4c101e65e43e664af56c488407e51a132051d5c353e86a4900"} Feb 04 11:59:51 crc kubenswrapper[4728]: I0204 11:59:51.427050 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" podStartSLOduration=1.904984335 podStartE2EDuration="2.427030261s" podCreationTimestamp="2026-02-04 11:59:49 +0000 UTC" 
firstStartedPulling="2026-02-04 11:59:50.352603479 +0000 UTC m=+1939.495307854" lastFinishedPulling="2026-02-04 11:59:50.874649385 +0000 UTC m=+1940.017353780" observedRunningTime="2026-02-04 11:59:51.426091058 +0000 UTC m=+1940.568795443" watchObservedRunningTime="2026-02-04 11:59:51.427030261 +0000 UTC m=+1940.569734646" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.163900 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw"] Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.165842 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.167936 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.167990 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.182189 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw"] Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.267001 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f47r\" (UniqueName: \"kubernetes.io/projected/a6273ef3-9992-44a4-8447-29d22c90fab9-kube-api-access-8f47r\") pod \"collect-profiles-29503440-m65nw\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.267153 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6273ef3-9992-44a4-8447-29d22c90fab9-config-volume\") pod \"collect-profiles-29503440-m65nw\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.267189 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6273ef3-9992-44a4-8447-29d22c90fab9-secret-volume\") pod \"collect-profiles-29503440-m65nw\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.369111 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6273ef3-9992-44a4-8447-29d22c90fab9-config-volume\") pod \"collect-profiles-29503440-m65nw\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.369313 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6273ef3-9992-44a4-8447-29d22c90fab9-secret-volume\") pod \"collect-profiles-29503440-m65nw\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 
12:00:00.369567 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f47r\" (UniqueName: \"kubernetes.io/projected/a6273ef3-9992-44a4-8447-29d22c90fab9-kube-api-access-8f47r\") pod \"collect-profiles-29503440-m65nw\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.370314 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6273ef3-9992-44a4-8447-29d22c90fab9-config-volume\") pod \"collect-profiles-29503440-m65nw\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.375116 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6273ef3-9992-44a4-8447-29d22c90fab9-secret-volume\") pod \"collect-profiles-29503440-m65nw\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.390069 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f47r\" (UniqueName: \"kubernetes.io/projected/a6273ef3-9992-44a4-8447-29d22c90fab9-kube-api-access-8f47r\") pod \"collect-profiles-29503440-m65nw\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.492881 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:00 crc kubenswrapper[4728]: I0204 12:00:00.966636 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw"] Feb 04 12:00:01 crc kubenswrapper[4728]: I0204 12:00:01.494856 4728 generic.go:334] "Generic (PLEG): container finished" podID="a6273ef3-9992-44a4-8447-29d22c90fab9" containerID="ae43b682eaaa7e03816090627cb9b1466cf41dee4775f4c0e6e5f4d0ba4e5b75" exitCode=0 Feb 04 12:00:01 crc kubenswrapper[4728]: I0204 12:00:01.495023 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" event={"ID":"a6273ef3-9992-44a4-8447-29d22c90fab9","Type":"ContainerDied","Data":"ae43b682eaaa7e03816090627cb9b1466cf41dee4775f4c0e6e5f4d0ba4e5b75"} Feb 04 12:00:01 crc kubenswrapper[4728]: I0204 12:00:01.495219 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" event={"ID":"a6273ef3-9992-44a4-8447-29d22c90fab9","Type":"ContainerStarted","Data":"599dc097306c3c75932577c7df6c91ce73a8751cc5c9d610ca5f5ceb61ff4b79"} Feb 04 12:00:02 crc kubenswrapper[4728]: I0204 12:00:02.811278 4728 util.go:48] "No ready sandbox for pod can be found. 
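The collect-profiles-29503440-m65nw job lands at exactly 12:00:00 because the CronJob controller names each Job after its scheduled time in minutes since the Unix epoch: 29503440 minutes decodes to 2026-02-04 12:00:00 UTC, matching the timestamps above. The decoding is one multiplication:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Job name suffix from the log: collect-profiles-29503440-m65nw.
	const scheduledMinutes = 29503440
	t := time.Unix(scheduledMinutes*60, 0).UTC()
	fmt.Println(t) // 2026-02-04 12:00:00 +0000 UTC
}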
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:02 crc kubenswrapper[4728]: I0204 12:00:02.916772 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f47r\" (UniqueName: \"kubernetes.io/projected/a6273ef3-9992-44a4-8447-29d22c90fab9-kube-api-access-8f47r\") pod \"a6273ef3-9992-44a4-8447-29d22c90fab9\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " Feb 04 12:00:02 crc kubenswrapper[4728]: I0204 12:00:02.916834 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6273ef3-9992-44a4-8447-29d22c90fab9-config-volume\") pod \"a6273ef3-9992-44a4-8447-29d22c90fab9\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " Feb 04 12:00:02 crc kubenswrapper[4728]: I0204 12:00:02.916926 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6273ef3-9992-44a4-8447-29d22c90fab9-secret-volume\") pod \"a6273ef3-9992-44a4-8447-29d22c90fab9\" (UID: \"a6273ef3-9992-44a4-8447-29d22c90fab9\") " Feb 04 12:00:02 crc kubenswrapper[4728]: I0204 12:00:02.917351 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6273ef3-9992-44a4-8447-29d22c90fab9-config-volume" (OuterVolumeSpecName: "config-volume") pod "a6273ef3-9992-44a4-8447-29d22c90fab9" (UID: "a6273ef3-9992-44a4-8447-29d22c90fab9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 12:00:02 crc kubenswrapper[4728]: I0204 12:00:02.921478 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6273ef3-9992-44a4-8447-29d22c90fab9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a6273ef3-9992-44a4-8447-29d22c90fab9" (UID: "a6273ef3-9992-44a4-8447-29d22c90fab9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:00:02 crc kubenswrapper[4728]: I0204 12:00:02.921999 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6273ef3-9992-44a4-8447-29d22c90fab9-kube-api-access-8f47r" (OuterVolumeSpecName: "kube-api-access-8f47r") pod "a6273ef3-9992-44a4-8447-29d22c90fab9" (UID: "a6273ef3-9992-44a4-8447-29d22c90fab9"). InnerVolumeSpecName "kube-api-access-8f47r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:00:03 crc kubenswrapper[4728]: I0204 12:00:03.019162 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6273ef3-9992-44a4-8447-29d22c90fab9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:03 crc kubenswrapper[4728]: I0204 12:00:03.019196 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6273ef3-9992-44a4-8447-29d22c90fab9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:03 crc kubenswrapper[4728]: I0204 12:00:03.019206 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f47r\" (UniqueName: \"kubernetes.io/projected/a6273ef3-9992-44a4-8447-29d22c90fab9-kube-api-access-8f47r\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:03 crc kubenswrapper[4728]: I0204 12:00:03.513232 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" event={"ID":"a6273ef3-9992-44a4-8447-29d22c90fab9","Type":"ContainerDied","Data":"599dc097306c3c75932577c7df6c91ce73a8751cc5c9d610ca5f5ceb61ff4b79"} Feb 04 12:00:03 crc kubenswrapper[4728]: I0204 12:00:03.513263 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw" Feb 04 12:00:03 crc kubenswrapper[4728]: I0204 12:00:03.513279 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="599dc097306c3c75932577c7df6c91ce73a8751cc5c9d610ca5f5ceb61ff4b79" Feb 04 12:00:24 crc kubenswrapper[4728]: I0204 12:00:24.899926 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pb72j"] Feb 04 12:00:24 crc kubenswrapper[4728]: E0204 12:00:24.900973 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6273ef3-9992-44a4-8447-29d22c90fab9" containerName="collect-profiles" Feb 04 12:00:24 crc kubenswrapper[4728]: I0204 12:00:24.900990 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6273ef3-9992-44a4-8447-29d22c90fab9" containerName="collect-profiles" Feb 04 12:00:24 crc kubenswrapper[4728]: I0204 12:00:24.901275 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6273ef3-9992-44a4-8447-29d22c90fab9" containerName="collect-profiles" Feb 04 12:00:24 crc kubenswrapper[4728]: I0204 12:00:24.903073 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:24 crc kubenswrapper[4728]: I0204 12:00:24.908213 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pb72j"] Feb 04 12:00:25 crc kubenswrapper[4728]: I0204 12:00:25.039735 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-catalog-content\") pod \"redhat-operators-pb72j\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:25 crc kubenswrapper[4728]: I0204 12:00:25.039789 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55scw\" (UniqueName: \"kubernetes.io/projected/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-kube-api-access-55scw\") pod \"redhat-operators-pb72j\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:25 crc kubenswrapper[4728]: I0204 12:00:25.039857 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-utilities\") pod \"redhat-operators-pb72j\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:25 crc kubenswrapper[4728]: I0204 12:00:25.141742 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-catalog-content\") pod \"redhat-operators-pb72j\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:25 crc kubenswrapper[4728]: I0204 12:00:25.141813 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55scw\" (UniqueName: \"kubernetes.io/projected/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-kube-api-access-55scw\") pod \"redhat-operators-pb72j\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:25 crc kubenswrapper[4728]: I0204 12:00:25.141918 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-utilities\") pod \"redhat-operators-pb72j\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:25 crc kubenswrapper[4728]: I0204 12:00:25.142236 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-catalog-content\") pod \"redhat-operators-pb72j\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:25 crc kubenswrapper[4728]: I0204 12:00:25.142346 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-utilities\") pod \"redhat-operators-pb72j\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:25 crc kubenswrapper[4728]: I0204 12:00:25.161056 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-55scw\" (UniqueName: \"kubernetes.io/projected/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-kube-api-access-55scw\") pod \"redhat-operators-pb72j\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:25 crc kubenswrapper[4728]: I0204 12:00:25.227276 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:25 crc kubenswrapper[4728]: I0204 12:00:25.706451 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pb72j"] Feb 04 12:00:25 crc kubenswrapper[4728]: W0204 12:00:25.707203 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe57d2f3_f188_49ae_bd9a_659bc182ee4b.slice/crio-19bb776389e94732adf74079b402e2a874032ebc7cc3f2b880ba27649687761f WatchSource:0}: Error finding container 19bb776389e94732adf74079b402e2a874032ebc7cc3f2b880ba27649687761f: Status 404 returned error can't find the container with id 19bb776389e94732adf74079b402e2a874032ebc7cc3f2b880ba27649687761f Feb 04 12:00:26 crc kubenswrapper[4728]: I0204 12:00:26.712314 4728 generic.go:334] "Generic (PLEG): container finished" podID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerID="1b40484b943941fd649eaf65872463ca6275d4d9a31509504a35149d569bb0c3" exitCode=0 Feb 04 12:00:26 crc kubenswrapper[4728]: I0204 12:00:26.712399 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb72j" event={"ID":"fe57d2f3-f188-49ae-bd9a-659bc182ee4b","Type":"ContainerDied","Data":"1b40484b943941fd649eaf65872463ca6275d4d9a31509504a35149d569bb0c3"} Feb 04 12:00:26 crc kubenswrapper[4728]: I0204 12:00:26.716061 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb72j" event={"ID":"fe57d2f3-f188-49ae-bd9a-659bc182ee4b","Type":"ContainerStarted","Data":"19bb776389e94732adf74079b402e2a874032ebc7cc3f2b880ba27649687761f"} Feb 04 12:00:27 crc kubenswrapper[4728]: I0204 12:00:27.725383 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb72j" event={"ID":"fe57d2f3-f188-49ae-bd9a-659bc182ee4b","Type":"ContainerStarted","Data":"499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa"} Feb 04 12:00:29 crc kubenswrapper[4728]: I0204 12:00:29.749668 4728 generic.go:334] "Generic (PLEG): container finished" podID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerID="499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa" exitCode=0 Feb 04 12:00:29 crc kubenswrapper[4728]: I0204 12:00:29.749741 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb72j" event={"ID":"fe57d2f3-f188-49ae-bd9a-659bc182ee4b","Type":"ContainerDied","Data":"499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa"} Feb 04 12:00:29 crc kubenswrapper[4728]: I0204 12:00:29.753072 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 12:00:30 crc kubenswrapper[4728]: I0204 12:00:30.762303 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb72j" event={"ID":"fe57d2f3-f188-49ae-bd9a-659bc182ee4b","Type":"ContainerStarted","Data":"a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340"} Feb 04 12:00:30 crc kubenswrapper[4728]: I0204 12:00:30.796441 4728 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-pb72j" podStartSLOduration=3.047739475 podStartE2EDuration="6.796415209s" podCreationTimestamp="2026-02-04 12:00:24 +0000 UTC" firstStartedPulling="2026-02-04 12:00:26.716993951 +0000 UTC m=+1975.859698346" lastFinishedPulling="2026-02-04 12:00:30.465669685 +0000 UTC m=+1979.608374080" observedRunningTime="2026-02-04 12:00:30.781587083 +0000 UTC m=+1979.924291478" watchObservedRunningTime="2026-02-04 12:00:30.796415209 +0000 UTC m=+1979.939119624" Feb 04 12:00:35 crc kubenswrapper[4728]: I0204 12:00:35.227788 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:35 crc kubenswrapper[4728]: I0204 12:00:35.228480 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:35 crc kubenswrapper[4728]: I0204 12:00:35.448601 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:00:35 crc kubenswrapper[4728]: I0204 12:00:35.448937 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:00:36 crc kubenswrapper[4728]: I0204 12:00:36.283994 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pb72j" podUID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerName="registry-server" probeResult="failure" output=< Feb 04 12:00:36 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 04 12:00:36 crc kubenswrapper[4728]: > Feb 04 12:00:36 crc kubenswrapper[4728]: I0204 12:00:36.809387 4728 generic.go:334] "Generic (PLEG): container finished" podID="2980ce97-200b-40eb-b084-817ac3a421ca" containerID="d2bc1c71d5288e4c101e65e43e664af56c488407e51a132051d5c353e86a4900" exitCode=0 Feb 04 12:00:36 crc kubenswrapper[4728]: I0204 12:00:36.809474 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" event={"ID":"2980ce97-200b-40eb-b084-817ac3a421ca","Type":"ContainerDied","Data":"d2bc1c71d5288e4c101e65e43e664af56c488407e51a132051d5c353e86a4900"} Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.263844 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.304943 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzgxw\" (UniqueName: \"kubernetes.io/projected/2980ce97-200b-40eb-b084-817ac3a421ca-kube-api-access-vzgxw\") pod \"2980ce97-200b-40eb-b084-817ac3a421ca\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.304988 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-ssh-key-openstack-edpm-ipam\") pod \"2980ce97-200b-40eb-b084-817ac3a421ca\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.305156 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-inventory\") pod \"2980ce97-200b-40eb-b084-817ac3a421ca\" (UID: \"2980ce97-200b-40eb-b084-817ac3a421ca\") " Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.310220 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2980ce97-200b-40eb-b084-817ac3a421ca-kube-api-access-vzgxw" (OuterVolumeSpecName: "kube-api-access-vzgxw") pod "2980ce97-200b-40eb-b084-817ac3a421ca" (UID: "2980ce97-200b-40eb-b084-817ac3a421ca"). InnerVolumeSpecName "kube-api-access-vzgxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.330510 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2980ce97-200b-40eb-b084-817ac3a421ca" (UID: "2980ce97-200b-40eb-b084-817ac3a421ca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.332657 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-inventory" (OuterVolumeSpecName: "inventory") pod "2980ce97-200b-40eb-b084-817ac3a421ca" (UID: "2980ce97-200b-40eb-b084-817ac3a421ca"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.407500 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzgxw\" (UniqueName: \"kubernetes.io/projected/2980ce97-200b-40eb-b084-817ac3a421ca-kube-api-access-vzgxw\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.407560 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.407573 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2980ce97-200b-40eb-b084-817ac3a421ca-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.828272 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" event={"ID":"2980ce97-200b-40eb-b084-817ac3a421ca","Type":"ContainerDied","Data":"bd54db7117174c2d6469ca80202cbaeaad90e41b914c1ec97c256e589891d408"} Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.828591 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd54db7117174c2d6469ca80202cbaeaad90e41b914c1ec97c256e589891d408" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.828301 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h6czk" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.912917 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bt5ct"] Feb 04 12:00:38 crc kubenswrapper[4728]: E0204 12:00:38.913563 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2980ce97-200b-40eb-b084-817ac3a421ca" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.913679 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2980ce97-200b-40eb-b084-817ac3a421ca" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.913996 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2980ce97-200b-40eb-b084-817ac3a421ca" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.914704 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.916601 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.916692 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.924267 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.924275 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 12:00:38 crc kubenswrapper[4728]: I0204 12:00:38.933047 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bt5ct"] Feb 04 12:00:39 crc kubenswrapper[4728]: I0204 12:00:39.018601 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mdvz\" (UniqueName: \"kubernetes.io/projected/7d180f53-1c00-4628-bd93-c3b5646307fd-kube-api-access-5mdvz\") pod \"ssh-known-hosts-edpm-deployment-bt5ct\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:39 crc kubenswrapper[4728]: I0204 12:00:39.018827 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bt5ct\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:39 crc kubenswrapper[4728]: I0204 12:00:39.018924 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bt5ct\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:39 crc kubenswrapper[4728]: I0204 12:00:39.121526 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bt5ct\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:39 crc kubenswrapper[4728]: I0204 12:00:39.121837 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mdvz\" (UniqueName: \"kubernetes.io/projected/7d180f53-1c00-4628-bd93-c3b5646307fd-kube-api-access-5mdvz\") pod \"ssh-known-hosts-edpm-deployment-bt5ct\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:39 crc kubenswrapper[4728]: I0204 12:00:39.122017 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bt5ct\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:39 crc 
kubenswrapper[4728]: I0204 12:00:39.126314 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bt5ct\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:39 crc kubenswrapper[4728]: I0204 12:00:39.140287 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bt5ct\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:39 crc kubenswrapper[4728]: I0204 12:00:39.148660 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mdvz\" (UniqueName: \"kubernetes.io/projected/7d180f53-1c00-4628-bd93-c3b5646307fd-kube-api-access-5mdvz\") pod \"ssh-known-hosts-edpm-deployment-bt5ct\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:39 crc kubenswrapper[4728]: I0204 12:00:39.232356 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:39 crc kubenswrapper[4728]: I0204 12:00:39.850576 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bt5ct"] Feb 04 12:00:40 crc kubenswrapper[4728]: I0204 12:00:40.847312 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" event={"ID":"7d180f53-1c00-4628-bd93-c3b5646307fd","Type":"ContainerStarted","Data":"b718dbc4be3527252ef0e8a5986e5e4049a16eaf953383fc936288e80fe6d0a7"} Feb 04 12:00:40 crc kubenswrapper[4728]: I0204 12:00:40.847738 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" event={"ID":"7d180f53-1c00-4628-bd93-c3b5646307fd","Type":"ContainerStarted","Data":"f235b4a66f9d29f9c492d8fbb4834e10c1050dfdb69b876644a604d067f907a5"} Feb 04 12:00:40 crc kubenswrapper[4728]: I0204 12:00:40.868993 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" podStartSLOduration=2.388896602 podStartE2EDuration="2.868971375s" podCreationTimestamp="2026-02-04 12:00:38 +0000 UTC" firstStartedPulling="2026-02-04 12:00:39.860125807 +0000 UTC m=+1989.002830192" lastFinishedPulling="2026-02-04 12:00:40.34020057 +0000 UTC m=+1989.482904965" observedRunningTime="2026-02-04 12:00:40.867254602 +0000 UTC m=+1990.009958977" watchObservedRunningTime="2026-02-04 12:00:40.868971375 +0000 UTC m=+1990.011675790" Feb 04 12:00:45 crc kubenswrapper[4728]: I0204 12:00:45.287887 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:45 crc kubenswrapper[4728]: I0204 12:00:45.372665 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:45 crc kubenswrapper[4728]: I0204 12:00:45.526841 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pb72j"] Feb 04 12:00:46 crc kubenswrapper[4728]: I0204 12:00:46.911781 4728 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-pb72j" podUID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerName="registry-server" containerID="cri-o://a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340" gracePeriod=2 Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.368258 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.393745 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-catalog-content\") pod \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.393949 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-utilities\") pod \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.394016 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55scw\" (UniqueName: \"kubernetes.io/projected/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-kube-api-access-55scw\") pod \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\" (UID: \"fe57d2f3-f188-49ae-bd9a-659bc182ee4b\") " Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.394976 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-utilities" (OuterVolumeSpecName: "utilities") pod "fe57d2f3-f188-49ae-bd9a-659bc182ee4b" (UID: "fe57d2f3-f188-49ae-bd9a-659bc182ee4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.402274 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-kube-api-access-55scw" (OuterVolumeSpecName: "kube-api-access-55scw") pod "fe57d2f3-f188-49ae-bd9a-659bc182ee4b" (UID: "fe57d2f3-f188-49ae-bd9a-659bc182ee4b"). InnerVolumeSpecName "kube-api-access-55scw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.496620 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.496660 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55scw\" (UniqueName: \"kubernetes.io/projected/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-kube-api-access-55scw\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.514818 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe57d2f3-f188-49ae-bd9a-659bc182ee4b" (UID: "fe57d2f3-f188-49ae-bd9a-659bc182ee4b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.598846 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe57d2f3-f188-49ae-bd9a-659bc182ee4b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.923498 4728 generic.go:334] "Generic (PLEG): container finished" podID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerID="a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340" exitCode=0 Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.923590 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb72j" event={"ID":"fe57d2f3-f188-49ae-bd9a-659bc182ee4b","Type":"ContainerDied","Data":"a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340"} Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.923638 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pb72j" event={"ID":"fe57d2f3-f188-49ae-bd9a-659bc182ee4b","Type":"ContainerDied","Data":"19bb776389e94732adf74079b402e2a874032ebc7cc3f2b880ba27649687761f"} Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.923643 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pb72j" Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.923694 4728 scope.go:117] "RemoveContainer" containerID="a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340" Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.928639 4728 generic.go:334] "Generic (PLEG): container finished" podID="7d180f53-1c00-4628-bd93-c3b5646307fd" containerID="b718dbc4be3527252ef0e8a5986e5e4049a16eaf953383fc936288e80fe6d0a7" exitCode=0 Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.928688 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" event={"ID":"7d180f53-1c00-4628-bd93-c3b5646307fd","Type":"ContainerDied","Data":"b718dbc4be3527252ef0e8a5986e5e4049a16eaf953383fc936288e80fe6d0a7"} Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.956695 4728 scope.go:117] "RemoveContainer" containerID="499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa" Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.975915 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pb72j"] Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.986971 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pb72j"] Feb 04 12:00:47 crc kubenswrapper[4728]: I0204 12:00:47.988128 4728 scope.go:117] "RemoveContainer" containerID="1b40484b943941fd649eaf65872463ca6275d4d9a31509504a35149d569bb0c3" Feb 04 12:00:48 crc kubenswrapper[4728]: I0204 12:00:48.042483 4728 scope.go:117] "RemoveContainer" containerID="a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340" Feb 04 12:00:48 crc kubenswrapper[4728]: E0204 12:00:48.042949 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340\": container with ID starting with a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340 not found: ID does not exist" containerID="a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340" Feb 04 12:00:48 crc kubenswrapper[4728]: 
I0204 12:00:48.042994 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340"} err="failed to get container status \"a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340\": rpc error: code = NotFound desc = could not find container \"a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340\": container with ID starting with a47bd3b8eeeee219ae8a9dff82260b71b2b8c01b3f471f89d97b535160cdc340 not found: ID does not exist" Feb 04 12:00:48 crc kubenswrapper[4728]: I0204 12:00:48.043018 4728 scope.go:117] "RemoveContainer" containerID="499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa" Feb 04 12:00:48 crc kubenswrapper[4728]: E0204 12:00:48.043438 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa\": container with ID starting with 499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa not found: ID does not exist" containerID="499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa" Feb 04 12:00:48 crc kubenswrapper[4728]: I0204 12:00:48.043468 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa"} err="failed to get container status \"499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa\": rpc error: code = NotFound desc = could not find container \"499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa\": container with ID starting with 499dd9b717cbb1bef5f8723247df5c18501246af2f946c336c76b103ad6690aa not found: ID does not exist" Feb 04 12:00:48 crc kubenswrapper[4728]: I0204 12:00:48.043488 4728 scope.go:117] "RemoveContainer" containerID="1b40484b943941fd649eaf65872463ca6275d4d9a31509504a35149d569bb0c3" Feb 04 12:00:48 crc kubenswrapper[4728]: E0204 12:00:48.043803 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b40484b943941fd649eaf65872463ca6275d4d9a31509504a35149d569bb0c3\": container with ID starting with 1b40484b943941fd649eaf65872463ca6275d4d9a31509504a35149d569bb0c3 not found: ID does not exist" containerID="1b40484b943941fd649eaf65872463ca6275d4d9a31509504a35149d569bb0c3" Feb 04 12:00:48 crc kubenswrapper[4728]: I0204 12:00:48.043851 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b40484b943941fd649eaf65872463ca6275d4d9a31509504a35149d569bb0c3"} err="failed to get container status \"1b40484b943941fd649eaf65872463ca6275d4d9a31509504a35149d569bb0c3\": rpc error: code = NotFound desc = could not find container \"1b40484b943941fd649eaf65872463ca6275d4d9a31509504a35149d569bb0c3\": container with ID starting with 1b40484b943941fd649eaf65872463ca6275d4d9a31509504a35149d569bb0c3 not found: ID does not exist" Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.359713 4728 util.go:48] "No ready sandbox for pod can be found. 
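
Each RemoveContainer/ContainerStatus round-trip above fails with rpc code = NotFound, and the kubelet logs it and moves on: deletion is idempotent, and NotFound just means the work is already done. A sketch of that pattern follows, with a stub runtime and a sentinel error standing in for gRPC's codes.NotFound.

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("rpc error: code = NotFound") // stand-in for gRPC codes.NotFound

    // stub runtime: every container ID present in the map "exists".
    type runtime map[string]bool

    func (r runtime) removeContainer(id string) error {
        if !r[id] {
            return fmt.Errorf("could not find container %q: %w", id, errNotFound)
        }
        delete(r, id)
        return nil
    }

    // cleanup is idempotent: NotFound is logged and swallowed rather than
    // retried, exactly the behavior pod_container_deletor.go shows above.
    func cleanup(r runtime, ids []string) {
        for _, id := range ids {
            if err := r.removeContainer(id); err != nil {
                if errors.Is(err, errNotFound) {
                    fmt.Println("DeleteContainer returned error (ignored):", err)
                    continue
                }
                panic(err) // anything else would be a real failure
            }
            fmt.Println("removed", id)
        }
    }

    func main() {
        r := runtime{"a47bd3b8": true}
        cleanup(r, []string{"a47bd3b8", "499dd9b7", "1b40484b"}) // last two are already gone
    }
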
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.432345 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mdvz\" (UniqueName: \"kubernetes.io/projected/7d180f53-1c00-4628-bd93-c3b5646307fd-kube-api-access-5mdvz\") pod \"7d180f53-1c00-4628-bd93-c3b5646307fd\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.432438 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-ssh-key-openstack-edpm-ipam\") pod \"7d180f53-1c00-4628-bd93-c3b5646307fd\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.432583 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-inventory-0\") pod \"7d180f53-1c00-4628-bd93-c3b5646307fd\" (UID: \"7d180f53-1c00-4628-bd93-c3b5646307fd\") " Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.438225 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d180f53-1c00-4628-bd93-c3b5646307fd-kube-api-access-5mdvz" (OuterVolumeSpecName: "kube-api-access-5mdvz") pod "7d180f53-1c00-4628-bd93-c3b5646307fd" (UID: "7d180f53-1c00-4628-bd93-c3b5646307fd"). InnerVolumeSpecName "kube-api-access-5mdvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.462878 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7d180f53-1c00-4628-bd93-c3b5646307fd" (UID: "7d180f53-1c00-4628-bd93-c3b5646307fd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.468280 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "7d180f53-1c00-4628-bd93-c3b5646307fd" (UID: "7d180f53-1c00-4628-bd93-c3b5646307fd"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.535253 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.535300 4728 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7d180f53-1c00-4628-bd93-c3b5646307fd-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.535311 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mdvz\" (UniqueName: \"kubernetes.io/projected/7d180f53-1c00-4628-bd93-c3b5646307fd-kube-api-access-5mdvz\") on node \"crc\" DevicePath \"\"" Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.568499 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" path="/var/lib/kubelet/pods/fe57d2f3-f188-49ae-bd9a-659bc182ee4b/volumes" Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.956390 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.956361 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bt5ct" event={"ID":"7d180f53-1c00-4628-bd93-c3b5646307fd","Type":"ContainerDied","Data":"f235b4a66f9d29f9c492d8fbb4834e10c1050dfdb69b876644a604d067f907a5"} Feb 04 12:00:49 crc kubenswrapper[4728]: I0204 12:00:49.956469 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f235b4a66f9d29f9c492d8fbb4834e10c1050dfdb69b876644a604d067f907a5" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.026061 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl"] Feb 04 12:00:50 crc kubenswrapper[4728]: E0204 12:00:50.026583 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerName="registry-server" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.026606 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerName="registry-server" Feb 04 12:00:50 crc kubenswrapper[4728]: E0204 12:00:50.026622 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerName="extract-utilities" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.026630 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerName="extract-utilities" Feb 04 12:00:50 crc kubenswrapper[4728]: E0204 12:00:50.026650 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerName="extract-content" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.026659 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerName="extract-content" Feb 04 12:00:50 crc kubenswrapper[4728]: E0204 12:00:50.026684 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d180f53-1c00-4628-bd93-c3b5646307fd" containerName="ssh-known-hosts-edpm-deployment" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.026693 4728 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7d180f53-1c00-4628-bd93-c3b5646307fd" containerName="ssh-known-hosts-edpm-deployment" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.026970 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe57d2f3-f188-49ae-bd9a-659bc182ee4b" containerName="registry-server" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.026999 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d180f53-1c00-4628-bd93-c3b5646307fd" containerName="ssh-known-hosts-edpm-deployment" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.027838 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.030774 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.030944 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.031136 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.031141 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.035743 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl"] Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.145844 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zxkjl\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.146036 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zxkjl\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.146349 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz46x\" (UniqueName: \"kubernetes.io/projected/d18d7dd8-c0a7-4f82-87d5-415841b53578-kube-api-access-qz46x\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zxkjl\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.248661 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zxkjl\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.248784 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qz46x\" (UniqueName: \"kubernetes.io/projected/d18d7dd8-c0a7-4f82-87d5-415841b53578-kube-api-access-qz46x\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zxkjl\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.248826 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zxkjl\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.259265 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zxkjl\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.259353 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zxkjl\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.266387 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz46x\" (UniqueName: \"kubernetes.io/projected/d18d7dd8-c0a7-4f82-87d5-415841b53578-kube-api-access-qz46x\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zxkjl\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.342720 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:00:50 crc kubenswrapper[4728]: W0204 12:00:50.910673 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd18d7dd8_c0a7_4f82_87d5_415841b53578.slice/crio-3534f9f2c3bc4e81d18d7957f0f71f5adeb8719fb3eaabebec0b0a3e66ad40e0 WatchSource:0}: Error finding container 3534f9f2c3bc4e81d18d7957f0f71f5adeb8719fb3eaabebec0b0a3e66ad40e0: Status 404 returned error can't find the container with id 3534f9f2c3bc4e81d18d7957f0f71f5adeb8719fb3eaabebec0b0a3e66ad40e0 Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.913420 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl"] Feb 04 12:00:50 crc kubenswrapper[4728]: I0204 12:00:50.968928 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" event={"ID":"d18d7dd8-c0a7-4f82-87d5-415841b53578","Type":"ContainerStarted","Data":"3534f9f2c3bc4e81d18d7957f0f71f5adeb8719fb3eaabebec0b0a3e66ad40e0"} Feb 04 12:00:51 crc kubenswrapper[4728]: I0204 12:00:51.979357 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" event={"ID":"d18d7dd8-c0a7-4f82-87d5-415841b53578","Type":"ContainerStarted","Data":"1343046a76277e2a8dc9c980ee959f5885b4c19939a5a6e449514300c25d47d1"} Feb 04 12:00:52 crc kubenswrapper[4728]: I0204 12:00:52.003005 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" podStartSLOduration=1.410601384 podStartE2EDuration="2.002980996s" podCreationTimestamp="2026-02-04 12:00:50 +0000 UTC" firstStartedPulling="2026-02-04 12:00:50.913211013 +0000 UTC m=+2000.055915398" lastFinishedPulling="2026-02-04 12:00:51.505590615 +0000 UTC m=+2000.648295010" observedRunningTime="2026-02-04 12:00:51.998782733 +0000 UTC m=+2001.141487238" watchObservedRunningTime="2026-02-04 12:00:52.002980996 +0000 UTC m=+2001.145685391" Feb 04 12:00:59 crc kubenswrapper[4728]: I0204 12:00:59.043547 4728 generic.go:334] "Generic (PLEG): container finished" podID="d18d7dd8-c0a7-4f82-87d5-415841b53578" containerID="1343046a76277e2a8dc9c980ee959f5885b4c19939a5a6e449514300c25d47d1" exitCode=0 Feb 04 12:00:59 crc kubenswrapper[4728]: I0204 12:00:59.043626 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" event={"ID":"d18d7dd8-c0a7-4f82-87d5-415841b53578","Type":"ContainerDied","Data":"1343046a76277e2a8dc9c980ee959f5885b4c19939a5a6e449514300c25d47d1"} Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.144531 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29503441-7qplb"] Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.147714 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.164196 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29503441-7qplb"] Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.237666 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-combined-ca-bundle\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.237732 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-fernet-keys\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.237865 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-config-data\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.237898 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtw55\" (UniqueName: \"kubernetes.io/projected/04c63d3d-617e-4b87-aa1f-1093a356ca44-kube-api-access-rtw55\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.339115 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtw55\" (UniqueName: \"kubernetes.io/projected/04c63d3d-617e-4b87-aa1f-1093a356ca44-kube-api-access-rtw55\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.339424 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-combined-ca-bundle\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.339471 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-fernet-keys\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.339546 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-config-data\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.345181 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-config-data\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.346057 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-fernet-keys\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.346169 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-combined-ca-bundle\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.358677 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtw55\" (UniqueName: \"kubernetes.io/projected/04c63d3d-617e-4b87-aa1f-1093a356ca44-kube-api-access-rtw55\") pod \"keystone-cron-29503441-7qplb\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.438977 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.473520 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.541798 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz46x\" (UniqueName: \"kubernetes.io/projected/d18d7dd8-c0a7-4f82-87d5-415841b53578-kube-api-access-qz46x\") pod \"d18d7dd8-c0a7-4f82-87d5-415841b53578\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.541890 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-inventory\") pod \"d18d7dd8-c0a7-4f82-87d5-415841b53578\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.541974 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-ssh-key-openstack-edpm-ipam\") pod \"d18d7dd8-c0a7-4f82-87d5-415841b53578\" (UID: \"d18d7dd8-c0a7-4f82-87d5-415841b53578\") " Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.548509 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18d7dd8-c0a7-4f82-87d5-415841b53578-kube-api-access-qz46x" (OuterVolumeSpecName: "kube-api-access-qz46x") pod "d18d7dd8-c0a7-4f82-87d5-415841b53578" (UID: "d18d7dd8-c0a7-4f82-87d5-415841b53578"). InnerVolumeSpecName "kube-api-access-qz46x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.572870 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d18d7dd8-c0a7-4f82-87d5-415841b53578" (UID: "d18d7dd8-c0a7-4f82-87d5-415841b53578"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.582984 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-inventory" (OuterVolumeSpecName: "inventory") pod "d18d7dd8-c0a7-4f82-87d5-415841b53578" (UID: "d18d7dd8-c0a7-4f82-87d5-415841b53578"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.645071 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.645126 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz46x\" (UniqueName: \"kubernetes.io/projected/d18d7dd8-c0a7-4f82-87d5-415841b53578-kube-api-access-qz46x\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.645140 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d18d7dd8-c0a7-4f82-87d5-415841b53578-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:00 crc kubenswrapper[4728]: I0204 12:01:00.895182 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29503441-7qplb"] Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.065819 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" event={"ID":"d18d7dd8-c0a7-4f82-87d5-415841b53578","Type":"ContainerDied","Data":"3534f9f2c3bc4e81d18d7957f0f71f5adeb8719fb3eaabebec0b0a3e66ad40e0"} Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.066125 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3534f9f2c3bc4e81d18d7957f0f71f5adeb8719fb3eaabebec0b0a3e66ad40e0" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.066034 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zxkjl" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.067153 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29503441-7qplb" event={"ID":"04c63d3d-617e-4b87-aa1f-1093a356ca44","Type":"ContainerStarted","Data":"8766c3f2e211c75505021fc456b3afe9e1e743a026c0d66f1cb4b868890a5717"} Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.154246 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl"] Feb 04 12:01:01 crc kubenswrapper[4728]: E0204 12:01:01.155023 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18d7dd8-c0a7-4f82-87d5-415841b53578" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.155038 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18d7dd8-c0a7-4f82-87d5-415841b53578" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.155370 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18d7dd8-c0a7-4f82-87d5-415841b53578" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.156227 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.158183 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.159195 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.159395 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.159538 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.165717 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl"] Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.257611 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6rz\" (UniqueName: \"kubernetes.io/projected/3e1c830b-d255-42d9-843e-80bee025b267-kube-api-access-gm6rz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.257665 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.257842 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.361230 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.361345 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6rz\" (UniqueName: \"kubernetes.io/projected/3e1c830b-d255-42d9-843e-80bee025b267-kube-api-access-gm6rz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.361371 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.367034 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.370197 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.380669 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm6rz\" (UniqueName: \"kubernetes.io/projected/3e1c830b-d255-42d9-843e-80bee025b267-kube-api-access-gm6rz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:01 crc kubenswrapper[4728]: I0204 12:01:01.501207 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:02 crc kubenswrapper[4728]: I0204 12:01:02.030860 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl"] Feb 04 12:01:02 crc kubenswrapper[4728]: W0204 12:01:02.037390 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e1c830b_d255_42d9_843e_80bee025b267.slice/crio-048784660f62ef407c6c177858f31bd2d7b7a94c8b057acd5f5a513dfd06fee4 WatchSource:0}: Error finding container 048784660f62ef407c6c177858f31bd2d7b7a94c8b057acd5f5a513dfd06fee4: Status 404 returned error can't find the container with id 048784660f62ef407c6c177858f31bd2d7b7a94c8b057acd5f5a513dfd06fee4 Feb 04 12:01:02 crc kubenswrapper[4728]: I0204 12:01:02.076486 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" event={"ID":"3e1c830b-d255-42d9-843e-80bee025b267","Type":"ContainerStarted","Data":"048784660f62ef407c6c177858f31bd2d7b7a94c8b057acd5f5a513dfd06fee4"} Feb 04 12:01:02 crc kubenswrapper[4728]: I0204 12:01:02.077909 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29503441-7qplb" event={"ID":"04c63d3d-617e-4b87-aa1f-1093a356ca44","Type":"ContainerStarted","Data":"af5bd541f2bdf7c19aa2434f31db07654167b8e2e041e9c37d25abedcd9594e9"} Feb 04 12:01:03 crc kubenswrapper[4728]: I0204 12:01:03.088559 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" event={"ID":"3e1c830b-d255-42d9-843e-80bee025b267","Type":"ContainerStarted","Data":"71b890790c300368e47fd9c9d0774689c535b181e55e9fee0444e099e9d27dea"} Feb 04 12:01:03 crc kubenswrapper[4728]: I0204 12:01:03.107599 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29503441-7qplb" podStartSLOduration=3.10758025 podStartE2EDuration="3.10758025s" podCreationTimestamp="2026-02-04 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 12:01:02.094563451 +0000 UTC m=+2011.237267846" watchObservedRunningTime="2026-02-04 12:01:03.10758025 +0000 UTC m=+2012.250284635" Feb 04 12:01:03 crc kubenswrapper[4728]: I0204 12:01:03.111057 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" podStartSLOduration=1.484999548 podStartE2EDuration="2.111042385s" podCreationTimestamp="2026-02-04 12:01:01 +0000 UTC" firstStartedPulling="2026-02-04 12:01:02.039945461 +0000 UTC m=+2011.182649866" lastFinishedPulling="2026-02-04 12:01:02.665988318 +0000 UTC m=+2011.808692703" observedRunningTime="2026-02-04 12:01:03.108530233 +0000 UTC m=+2012.251234638" watchObservedRunningTime="2026-02-04 12:01:03.111042385 +0000 UTC m=+2012.253746760" Feb 04 12:01:04 crc kubenswrapper[4728]: I0204 12:01:04.098329 4728 generic.go:334] "Generic (PLEG): container finished" podID="04c63d3d-617e-4b87-aa1f-1093a356ca44" containerID="af5bd541f2bdf7c19aa2434f31db07654167b8e2e041e9c37d25abedcd9594e9" exitCode=0 Feb 04 12:01:04 crc kubenswrapper[4728]: I0204 12:01:04.098458 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29503441-7qplb" 
event={"ID":"04c63d3d-617e-4b87-aa1f-1093a356ca44","Type":"ContainerDied","Data":"af5bd541f2bdf7c19aa2434f31db07654167b8e2e041e9c37d25abedcd9594e9"} Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.442013 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.448602 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.448673 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.542864 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-fernet-keys\") pod \"04c63d3d-617e-4b87-aa1f-1093a356ca44\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.542929 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-config-data\") pod \"04c63d3d-617e-4b87-aa1f-1093a356ca44\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.543023 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-combined-ca-bundle\") pod \"04c63d3d-617e-4b87-aa1f-1093a356ca44\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.543110 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtw55\" (UniqueName: \"kubernetes.io/projected/04c63d3d-617e-4b87-aa1f-1093a356ca44-kube-api-access-rtw55\") pod \"04c63d3d-617e-4b87-aa1f-1093a356ca44\" (UID: \"04c63d3d-617e-4b87-aa1f-1093a356ca44\") " Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.555540 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "04c63d3d-617e-4b87-aa1f-1093a356ca44" (UID: "04c63d3d-617e-4b87-aa1f-1093a356ca44"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.563587 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c63d3d-617e-4b87-aa1f-1093a356ca44-kube-api-access-rtw55" (OuterVolumeSpecName: "kube-api-access-rtw55") pod "04c63d3d-617e-4b87-aa1f-1093a356ca44" (UID: "04c63d3d-617e-4b87-aa1f-1093a356ca44"). InnerVolumeSpecName "kube-api-access-rtw55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.573224 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04c63d3d-617e-4b87-aa1f-1093a356ca44" (UID: "04c63d3d-617e-4b87-aa1f-1093a356ca44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.599984 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-config-data" (OuterVolumeSpecName: "config-data") pod "04c63d3d-617e-4b87-aa1f-1093a356ca44" (UID: "04c63d3d-617e-4b87-aa1f-1093a356ca44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.645605 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtw55\" (UniqueName: \"kubernetes.io/projected/04c63d3d-617e-4b87-aa1f-1093a356ca44-kube-api-access-rtw55\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.645640 4728 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.645651 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-config-data\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:05 crc kubenswrapper[4728]: I0204 12:01:05.645658 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c63d3d-617e-4b87-aa1f-1093a356ca44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:06 crc kubenswrapper[4728]: I0204 12:01:06.126113 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29503441-7qplb" event={"ID":"04c63d3d-617e-4b87-aa1f-1093a356ca44","Type":"ContainerDied","Data":"8766c3f2e211c75505021fc456b3afe9e1e743a026c0d66f1cb4b868890a5717"} Feb 04 12:01:06 crc kubenswrapper[4728]: I0204 12:01:06.126185 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29503441-7qplb" Feb 04 12:01:06 crc kubenswrapper[4728]: I0204 12:01:06.126183 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8766c3f2e211c75505021fc456b3afe9e1e743a026c0d66f1cb4b868890a5717" Feb 04 12:01:12 crc kubenswrapper[4728]: I0204 12:01:12.181309 4728 generic.go:334] "Generic (PLEG): container finished" podID="3e1c830b-d255-42d9-843e-80bee025b267" containerID="71b890790c300368e47fd9c9d0774689c535b181e55e9fee0444e099e9d27dea" exitCode=0 Feb 04 12:01:12 crc kubenswrapper[4728]: I0204 12:01:12.181394 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" event={"ID":"3e1c830b-d255-42d9-843e-80bee025b267","Type":"ContainerDied","Data":"71b890790c300368e47fd9c9d0774689c535b181e55e9fee0444e099e9d27dea"} Feb 04 12:01:13 crc kubenswrapper[4728]: I0204 12:01:13.624253 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:13 crc kubenswrapper[4728]: I0204 12:01:13.710032 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm6rz\" (UniqueName: \"kubernetes.io/projected/3e1c830b-d255-42d9-843e-80bee025b267-kube-api-access-gm6rz\") pod \"3e1c830b-d255-42d9-843e-80bee025b267\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " Feb 04 12:01:13 crc kubenswrapper[4728]: I0204 12:01:13.711018 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-ssh-key-openstack-edpm-ipam\") pod \"3e1c830b-d255-42d9-843e-80bee025b267\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " Feb 04 12:01:13 crc kubenswrapper[4728]: I0204 12:01:13.711227 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-inventory\") pod \"3e1c830b-d255-42d9-843e-80bee025b267\" (UID: \"3e1c830b-d255-42d9-843e-80bee025b267\") " Feb 04 12:01:13 crc kubenswrapper[4728]: I0204 12:01:13.716419 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1c830b-d255-42d9-843e-80bee025b267-kube-api-access-gm6rz" (OuterVolumeSpecName: "kube-api-access-gm6rz") pod "3e1c830b-d255-42d9-843e-80bee025b267" (UID: "3e1c830b-d255-42d9-843e-80bee025b267"). InnerVolumeSpecName "kube-api-access-gm6rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:01:13 crc kubenswrapper[4728]: I0204 12:01:13.736507 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-inventory" (OuterVolumeSpecName: "inventory") pod "3e1c830b-d255-42d9-843e-80bee025b267" (UID: "3e1c830b-d255-42d9-843e-80bee025b267"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:13 crc kubenswrapper[4728]: I0204 12:01:13.739932 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3e1c830b-d255-42d9-843e-80bee025b267" (UID: "3e1c830b-d255-42d9-843e-80bee025b267"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:13 crc kubenswrapper[4728]: I0204 12:01:13.813799 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm6rz\" (UniqueName: \"kubernetes.io/projected/3e1c830b-d255-42d9-843e-80bee025b267-kube-api-access-gm6rz\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:13 crc kubenswrapper[4728]: I0204 12:01:13.813830 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:13 crc kubenswrapper[4728]: I0204 12:01:13.813840 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e1c830b-d255-42d9-843e-80bee025b267-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.207360 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" event={"ID":"3e1c830b-d255-42d9-843e-80bee025b267","Type":"ContainerDied","Data":"048784660f62ef407c6c177858f31bd2d7b7a94c8b057acd5f5a513dfd06fee4"} Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.207738 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="048784660f62ef407c6c177858f31bd2d7b7a94c8b057acd5f5a513dfd06fee4" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.207428 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.320842 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz"] Feb 04 12:01:14 crc kubenswrapper[4728]: E0204 12:01:14.321490 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1c830b-d255-42d9-843e-80bee025b267" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.321531 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1c830b-d255-42d9-843e-80bee025b267" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 04 12:01:14 crc kubenswrapper[4728]: E0204 12:01:14.321667 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c63d3d-617e-4b87-aa1f-1093a356ca44" containerName="keystone-cron" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.321688 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c63d3d-617e-4b87-aa1f-1093a356ca44" containerName="keystone-cron" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.322184 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c63d3d-617e-4b87-aa1f-1093a356ca44" containerName="keystone-cron" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.322233 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1c830b-d255-42d9-843e-80bee025b267" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.323580 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.327251 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.329206 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.329535 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.329894 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.330086 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.330270 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.330637 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.333511 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.336328 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz"] Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.426730 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.426870 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.426931 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.426969 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.426993 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.427067 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.427110 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.427144 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.427200 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqbs7\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-kube-api-access-wqbs7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.427284 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.427340 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.427490 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.427520 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.427668 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.529499 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.529573 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.529607 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.529636 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.529657 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.529706 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.529738 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.529992 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.530049 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqbs7\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-kube-api-access-wqbs7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.530123 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.530232 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.530831 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.530860 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.530969 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.533914 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.534517 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.534655 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.535082 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.535162 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.535396 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.536049 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.536602 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.537867 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.538168 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.538824 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.542311 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.542963 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.545906 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqbs7\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-kube-api-access-wqbs7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:14 crc kubenswrapper[4728]: I0204 12:01:14.661350 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:15 crc kubenswrapper[4728]: I0204 12:01:15.245466 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz"] Feb 04 12:01:16 crc kubenswrapper[4728]: I0204 12:01:16.253719 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" event={"ID":"a127564d-8974-4dff-9963-d143e45e07f9","Type":"ContainerStarted","Data":"eff7522b2df44a6f54766bf0a927b9a153b0384b18c24c99f2011e26427f353d"} Feb 04 12:01:16 crc kubenswrapper[4728]: I0204 12:01:16.254120 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" event={"ID":"a127564d-8974-4dff-9963-d143e45e07f9","Type":"ContainerStarted","Data":"84ff0937ee1f2e72d6029e06aee1be3f62f60ea34cb197d904397ef7a07ffb72"} Feb 04 12:01:16 crc kubenswrapper[4728]: I0204 12:01:16.289746 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" podStartSLOduration=1.8433532289999999 podStartE2EDuration="2.289725757s" podCreationTimestamp="2026-02-04 12:01:14 +0000 UTC" firstStartedPulling="2026-02-04 12:01:15.239588638 +0000 UTC m=+2024.382293023" lastFinishedPulling="2026-02-04 12:01:15.685961166 +0000 UTC m=+2024.828665551" observedRunningTime="2026-02-04 12:01:16.289229264 +0000 UTC m=+2025.431933659" watchObservedRunningTime="2026-02-04 12:01:16.289725757 +0000 UTC m=+2025.432430162" Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.335500 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xqnxw"] Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.338816 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.403790 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xqnxw"] Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.517796 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6wj5\" (UniqueName: \"kubernetes.io/projected/75515beb-6a35-47fb-a031-0554cbcd3e99-kube-api-access-k6wj5\") pod \"certified-operators-xqnxw\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") " pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.517929 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-catalog-content\") pod \"certified-operators-xqnxw\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") " pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.517995 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-utilities\") pod \"certified-operators-xqnxw\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") " pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.619788 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-catalog-content\") pod \"certified-operators-xqnxw\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") " pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.619863 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-utilities\") pod \"certified-operators-xqnxw\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") " pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.619933 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6wj5\" (UniqueName: \"kubernetes.io/projected/75515beb-6a35-47fb-a031-0554cbcd3e99-kube-api-access-k6wj5\") pod \"certified-operators-xqnxw\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") " pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.620497 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-catalog-content\") pod \"certified-operators-xqnxw\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") " pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.620549 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-utilities\") pod \"certified-operators-xqnxw\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") " pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.663067 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k6wj5\" (UniqueName: \"kubernetes.io/projected/75515beb-6a35-47fb-a031-0554cbcd3e99-kube-api-access-k6wj5\") pod \"certified-operators-xqnxw\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") " pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:23 crc kubenswrapper[4728]: I0204 12:01:23.705726 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:24 crc kubenswrapper[4728]: I0204 12:01:24.182807 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xqnxw"] Feb 04 12:01:24 crc kubenswrapper[4728]: I0204 12:01:24.351499 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqnxw" event={"ID":"75515beb-6a35-47fb-a031-0554cbcd3e99","Type":"ContainerStarted","Data":"f42ef168c8c054c25432521b6dca326a2fcf5ddaf30aa9fa9c2295c5c4330639"} Feb 04 12:01:25 crc kubenswrapper[4728]: I0204 12:01:25.367935 4728 generic.go:334] "Generic (PLEG): container finished" podID="75515beb-6a35-47fb-a031-0554cbcd3e99" containerID="6de2c0125994f7b6ad1d87bdeec39c6a9a6200d9e2784f49dfada61c33392dc0" exitCode=0 Feb 04 12:01:25 crc kubenswrapper[4728]: I0204 12:01:25.367985 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqnxw" event={"ID":"75515beb-6a35-47fb-a031-0554cbcd3e99","Type":"ContainerDied","Data":"6de2c0125994f7b6ad1d87bdeec39c6a9a6200d9e2784f49dfada61c33392dc0"} Feb 04 12:01:26 crc kubenswrapper[4728]: I0204 12:01:26.381494 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqnxw" event={"ID":"75515beb-6a35-47fb-a031-0554cbcd3e99","Type":"ContainerStarted","Data":"12bf50413c7700079fa1be6790eedf179e985279f6238c39b8b41d84debe9e3a"} Feb 04 12:01:28 crc kubenswrapper[4728]: I0204 12:01:28.403034 4728 generic.go:334] "Generic (PLEG): container finished" podID="75515beb-6a35-47fb-a031-0554cbcd3e99" containerID="12bf50413c7700079fa1be6790eedf179e985279f6238c39b8b41d84debe9e3a" exitCode=0 Feb 04 12:01:28 crc kubenswrapper[4728]: I0204 12:01:28.403119 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqnxw" event={"ID":"75515beb-6a35-47fb-a031-0554cbcd3e99","Type":"ContainerDied","Data":"12bf50413c7700079fa1be6790eedf179e985279f6238c39b8b41d84debe9e3a"} Feb 04 12:01:30 crc kubenswrapper[4728]: I0204 12:01:30.424149 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqnxw" event={"ID":"75515beb-6a35-47fb-a031-0554cbcd3e99","Type":"ContainerStarted","Data":"2b4ff4f09328897b04a5f5fa50234910e402133f81064cbd8d2e49f4f8ac5607"} Feb 04 12:01:30 crc kubenswrapper[4728]: I0204 12:01:30.452833 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xqnxw" podStartSLOduration=3.385022153 podStartE2EDuration="7.452814915s" podCreationTimestamp="2026-02-04 12:01:23 +0000 UTC" firstStartedPulling="2026-02-04 12:01:25.369642606 +0000 UTC m=+2034.512347001" lastFinishedPulling="2026-02-04 12:01:29.437435338 +0000 UTC m=+2038.580139763" observedRunningTime="2026-02-04 12:01:30.447540326 +0000 UTC m=+2039.590244731" watchObservedRunningTime="2026-02-04 12:01:30.452814915 +0000 UTC m=+2039.595519290" Feb 04 12:01:33 crc kubenswrapper[4728]: I0204 12:01:33.707105 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:33 crc kubenswrapper[4728]: I0204 12:01:33.707580 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:33 crc kubenswrapper[4728]: I0204 12:01:33.748308 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:34 crc kubenswrapper[4728]: I0204 12:01:34.519898 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xqnxw" Feb 04 12:01:34 crc kubenswrapper[4728]: I0204 12:01:34.593579 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xqnxw"] Feb 04 12:01:35 crc kubenswrapper[4728]: I0204 12:01:35.448346 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:01:35 crc kubenswrapper[4728]: I0204 12:01:35.448740 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:01:35 crc kubenswrapper[4728]: I0204 12:01:35.448835 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 12:01:35 crc kubenswrapper[4728]: I0204 12:01:35.450165 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"529af0f7f966a4ea0b6e4a1f05c7ef144a460c0249245b7c950d3e46bc1f0c22"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 12:01:35 crc kubenswrapper[4728]: I0204 12:01:35.450279 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://529af0f7f966a4ea0b6e4a1f05c7ef144a460c0249245b7c950d3e46bc1f0c22" gracePeriod=600 Feb 04 12:01:36 crc kubenswrapper[4728]: I0204 12:01:36.486807 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="529af0f7f966a4ea0b6e4a1f05c7ef144a460c0249245b7c950d3e46bc1f0c22" exitCode=0 Feb 04 12:01:36 crc kubenswrapper[4728]: I0204 12:01:36.486877 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"529af0f7f966a4ea0b6e4a1f05c7ef144a460c0249245b7c950d3e46bc1f0c22"} Feb 04 12:01:36 crc kubenswrapper[4728]: I0204 12:01:36.487253 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860"} Feb 04 12:01:36 crc 
Feb 04 12:01:36 crc kubenswrapper[4728]: I0204 12:01:36.487292 4728 scope.go:117] "RemoveContainer" containerID="cbdc96124c1b47f1e73d3676e7cb7e9f6317bda5d41a317817db1eb4a39a2994"
Feb 04 12:01:36 crc kubenswrapper[4728]: I0204 12:01:36.487494 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xqnxw" podUID="75515beb-6a35-47fb-a031-0554cbcd3e99" containerName="registry-server" containerID="cri-o://2b4ff4f09328897b04a5f5fa50234910e402133f81064cbd8d2e49f4f8ac5607" gracePeriod=2
Feb 04 12:01:37 crc kubenswrapper[4728]: I0204 12:01:37.501477 4728 generic.go:334] "Generic (PLEG): container finished" podID="75515beb-6a35-47fb-a031-0554cbcd3e99" containerID="2b4ff4f09328897b04a5f5fa50234910e402133f81064cbd8d2e49f4f8ac5607" exitCode=0
Feb 04 12:01:37 crc kubenswrapper[4728]: I0204 12:01:37.501734 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqnxw" event={"ID":"75515beb-6a35-47fb-a031-0554cbcd3e99","Type":"ContainerDied","Data":"2b4ff4f09328897b04a5f5fa50234910e402133f81064cbd8d2e49f4f8ac5607"}
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.087162 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqnxw"
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.147037 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-utilities\") pod \"75515beb-6a35-47fb-a031-0554cbcd3e99\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") "
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.147154 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6wj5\" (UniqueName: \"kubernetes.io/projected/75515beb-6a35-47fb-a031-0554cbcd3e99-kube-api-access-k6wj5\") pod \"75515beb-6a35-47fb-a031-0554cbcd3e99\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") "
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.147208 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-catalog-content\") pod \"75515beb-6a35-47fb-a031-0554cbcd3e99\" (UID: \"75515beb-6a35-47fb-a031-0554cbcd3e99\") "
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.148447 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-utilities" (OuterVolumeSpecName: "utilities") pod "75515beb-6a35-47fb-a031-0554cbcd3e99" (UID: "75515beb-6a35-47fb-a031-0554cbcd3e99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.154366 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75515beb-6a35-47fb-a031-0554cbcd3e99-kube-api-access-k6wj5" (OuterVolumeSpecName: "kube-api-access-k6wj5") pod "75515beb-6a35-47fb-a031-0554cbcd3e99" (UID: "75515beb-6a35-47fb-a031-0554cbcd3e99"). InnerVolumeSpecName "kube-api-access-k6wj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.193086 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75515beb-6a35-47fb-a031-0554cbcd3e99" (UID: "75515beb-6a35-47fb-a031-0554cbcd3e99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.249700 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.249734 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6wj5\" (UniqueName: \"kubernetes.io/projected/75515beb-6a35-47fb-a031-0554cbcd3e99-kube-api-access-k6wj5\") on node \"crc\" DevicePath \"\""
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.249743 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75515beb-6a35-47fb-a031-0554cbcd3e99-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.513785 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqnxw" event={"ID":"75515beb-6a35-47fb-a031-0554cbcd3e99","Type":"ContainerDied","Data":"f42ef168c8c054c25432521b6dca326a2fcf5ddaf30aa9fa9c2295c5c4330639"}
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.513826 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqnxw"
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.513871 4728 scope.go:117] "RemoveContainer" containerID="2b4ff4f09328897b04a5f5fa50234910e402133f81064cbd8d2e49f4f8ac5607"
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.544463 4728 scope.go:117] "RemoveContainer" containerID="12bf50413c7700079fa1be6790eedf179e985279f6238c39b8b41d84debe9e3a"
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.563898 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xqnxw"]
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.571788 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xqnxw"]
Feb 04 12:01:38 crc kubenswrapper[4728]: I0204 12:01:38.590951 4728 scope.go:117] "RemoveContainer" containerID="6de2c0125994f7b6ad1d87bdeec39c6a9a6200d9e2784f49dfada61c33392dc0"
Feb 04 12:01:39 crc kubenswrapper[4728]: I0204 12:01:39.565638 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75515beb-6a35-47fb-a031-0554cbcd3e99" path="/var/lib/kubelet/pods/75515beb-6a35-47fb-a031-0554cbcd3e99/volumes"
Feb 04 12:01:50 crc kubenswrapper[4728]: I0204 12:01:50.623566 4728 generic.go:334] "Generic (PLEG): container finished" podID="a127564d-8974-4dff-9963-d143e45e07f9" containerID="eff7522b2df44a6f54766bf0a927b9a153b0384b18c24c99f2011e26427f353d" exitCode=0
event={"ID":"a127564d-8974-4dff-9963-d143e45e07f9","Type":"ContainerDied","Data":"eff7522b2df44a6f54766bf0a927b9a153b0384b18c24c99f2011e26427f353d"} Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.168866 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.248072 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.248525 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-neutron-metadata-combined-ca-bundle\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.248656 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.248804 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-repo-setup-combined-ca-bundle\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.248926 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-inventory\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.249091 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ssh-key-openstack-edpm-ipam\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.249268 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-nova-combined-ca-bundle\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.249422 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ovn-combined-ca-bundle\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.249590 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-bootstrap-combined-ca-bundle\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.249727 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-telemetry-combined-ca-bundle\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.249952 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-libvirt-combined-ca-bundle\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.250106 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqbs7\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-kube-api-access-wqbs7\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.250309 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.250460 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"a127564d-8974-4dff-9963-d143e45e07f9\" (UID: \"a127564d-8974-4dff-9963-d143e45e07f9\") " Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.255371 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.256579 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.257306 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.258777 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-kube-api-access-wqbs7" (OuterVolumeSpecName: "kube-api-access-wqbs7") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "kube-api-access-wqbs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.259260 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.260300 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.261267 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.261652 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.261746 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.262435 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.268955 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.270810 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.282641 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-inventory" (OuterVolumeSpecName: "inventory") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.286558 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a127564d-8974-4dff-9963-d143e45e07f9" (UID: "a127564d-8974-4dff-9963-d143e45e07f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354040 4728 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354089 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqbs7\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-kube-api-access-wqbs7\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354106 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354120 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354135 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354148 4728 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354161 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a127564d-8974-4dff-9963-d143e45e07f9-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354174 4728 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354188 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354199 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354212 4728 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354224 4728 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354235 4728 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.354248 4728 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a127564d-8974-4dff-9963-d143e45e07f9-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.643036 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" event={"ID":"a127564d-8974-4dff-9963-d143e45e07f9","Type":"ContainerDied","Data":"84ff0937ee1f2e72d6029e06aee1be3f62f60ea34cb197d904397ef7a07ffb72"} Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.643075 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84ff0937ee1f2e72d6029e06aee1be3f62f60ea34cb197d904397ef7a07ffb72" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.643086 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.743023 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"] Feb 04 12:01:52 crc kubenswrapper[4728]: E0204 12:01:52.743585 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75515beb-6a35-47fb-a031-0554cbcd3e99" containerName="registry-server" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.743648 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="75515beb-6a35-47fb-a031-0554cbcd3e99" containerName="registry-server" Feb 04 12:01:52 crc kubenswrapper[4728]: E0204 12:01:52.743732 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75515beb-6a35-47fb-a031-0554cbcd3e99" containerName="extract-content" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.743816 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="75515beb-6a35-47fb-a031-0554cbcd3e99" containerName="extract-content" Feb 04 12:01:52 crc kubenswrapper[4728]: E0204 12:01:52.743895 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a127564d-8974-4dff-9963-d143e45e07f9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.743950 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a127564d-8974-4dff-9963-d143e45e07f9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 04 12:01:52 crc kubenswrapper[4728]: E0204 12:01:52.744024 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75515beb-6a35-47fb-a031-0554cbcd3e99" containerName="extract-utilities" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.744076 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="75515beb-6a35-47fb-a031-0554cbcd3e99" containerName="extract-utilities" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.744281 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="75515beb-6a35-47fb-a031-0554cbcd3e99" containerName="registry-server" Feb 04 12:01:52 crc 
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.744351 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a127564d-8974-4dff-9963-d143e45e07f9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.744999 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.748166 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.748283 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.748556 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.748806 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.749320 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.756026 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"]
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.871690 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.871807 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.871835 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.871860 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7dc36402-dfbd-4f2b-a604-b24331482d0e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.871901 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddnnr\" (UniqueName: \"kubernetes.io/projected/7dc36402-dfbd-4f2b-a604-b24331482d0e-kube-api-access-ddnnr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.973334 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddnnr\" (UniqueName: \"kubernetes.io/projected/7dc36402-dfbd-4f2b-a604-b24331482d0e-kube-api-access-ddnnr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.973454 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.973515 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.973544 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.973574 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7dc36402-dfbd-4f2b-a604-b24331482d0e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.974510 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7dc36402-dfbd-4f2b-a604-b24331482d0e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.977511 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"
\"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.977590 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667" Feb 04 12:01:52 crc kubenswrapper[4728]: I0204 12:01:52.999011 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddnnr\" (UniqueName: \"kubernetes.io/projected/7dc36402-dfbd-4f2b-a604-b24331482d0e-kube-api-access-ddnnr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cf667\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667" Feb 04 12:01:53 crc kubenswrapper[4728]: I0204 12:01:53.063059 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667" Feb 04 12:01:53 crc kubenswrapper[4728]: I0204 12:01:53.614685 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667"] Feb 04 12:01:53 crc kubenswrapper[4728]: I0204 12:01:53.659188 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667" event={"ID":"7dc36402-dfbd-4f2b-a604-b24331482d0e","Type":"ContainerStarted","Data":"4783ddf9515fea08f5c914cc868802283dd292ac5f3632a4e87a56aac7b364ca"} Feb 04 12:01:54 crc kubenswrapper[4728]: I0204 12:01:54.669998 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667" event={"ID":"7dc36402-dfbd-4f2b-a604-b24331482d0e","Type":"ContainerStarted","Data":"bfbe0ceffa459cd5776cc4dba0dc641d30f861953bc1a54fc5e263afac865246"} Feb 04 12:01:54 crc kubenswrapper[4728]: I0204 12:01:54.694152 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667" podStartSLOduration=2.262632578 podStartE2EDuration="2.694127773s" podCreationTimestamp="2026-02-04 12:01:52 +0000 UTC" firstStartedPulling="2026-02-04 12:01:53.618362323 +0000 UTC m=+2062.761066708" lastFinishedPulling="2026-02-04 12:01:54.049857518 +0000 UTC m=+2063.192561903" observedRunningTime="2026-02-04 12:01:54.687470539 +0000 UTC m=+2063.830174944" watchObservedRunningTime="2026-02-04 12:01:54.694127773 +0000 UTC m=+2063.836832158" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.007317 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7ppdr"] Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.009702 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.021042 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ppdr"] Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.050790 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z625\" (UniqueName: \"kubernetes.io/projected/1b31761a-9098-429e-a53d-77dd8d033202-kube-api-access-2z625\") pod \"redhat-marketplace-7ppdr\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.051019 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-utilities\") pod \"redhat-marketplace-7ppdr\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.051061 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-catalog-content\") pod \"redhat-marketplace-7ppdr\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.152729 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-utilities\") pod \"redhat-marketplace-7ppdr\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.152831 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-catalog-content\") pod \"redhat-marketplace-7ppdr\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.152925 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z625\" (UniqueName: \"kubernetes.io/projected/1b31761a-9098-429e-a53d-77dd8d033202-kube-api-access-2z625\") pod \"redhat-marketplace-7ppdr\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.153288 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-utilities\") pod \"redhat-marketplace-7ppdr\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.153465 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-catalog-content\") pod \"redhat-marketplace-7ppdr\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.184344 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2z625\" (UniqueName: \"kubernetes.io/projected/1b31761a-9098-429e-a53d-77dd8d033202-kube-api-access-2z625\") pod \"redhat-marketplace-7ppdr\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.350784 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:02 crc kubenswrapper[4728]: I0204 12:02:02.878911 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ppdr"] Feb 04 12:02:03 crc kubenswrapper[4728]: I0204 12:02:03.745473 4728 generic.go:334] "Generic (PLEG): container finished" podID="1b31761a-9098-429e-a53d-77dd8d033202" containerID="da634106251e9abc7ab3447a39fb96ea988b44b305f2e4ef6fb6cf0dcc47f8f7" exitCode=0 Feb 04 12:02:03 crc kubenswrapper[4728]: I0204 12:02:03.745579 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ppdr" event={"ID":"1b31761a-9098-429e-a53d-77dd8d033202","Type":"ContainerDied","Data":"da634106251e9abc7ab3447a39fb96ea988b44b305f2e4ef6fb6cf0dcc47f8f7"} Feb 04 12:02:03 crc kubenswrapper[4728]: I0204 12:02:03.745901 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ppdr" event={"ID":"1b31761a-9098-429e-a53d-77dd8d033202","Type":"ContainerStarted","Data":"6cea157755f9571097aac9f299e07bd069b3d8046a0df56a0648fe1e0f2b1f9f"} Feb 04 12:02:04 crc kubenswrapper[4728]: I0204 12:02:04.761907 4728 generic.go:334] "Generic (PLEG): container finished" podID="1b31761a-9098-429e-a53d-77dd8d033202" containerID="06c2b28b9ee6b52faea5db9633a06b9031af989bc13414d07fc02f5f755334b6" exitCode=0 Feb 04 12:02:04 crc kubenswrapper[4728]: I0204 12:02:04.762135 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ppdr" event={"ID":"1b31761a-9098-429e-a53d-77dd8d033202","Type":"ContainerDied","Data":"06c2b28b9ee6b52faea5db9633a06b9031af989bc13414d07fc02f5f755334b6"} Feb 04 12:02:05 crc kubenswrapper[4728]: I0204 12:02:05.777222 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ppdr" event={"ID":"1b31761a-9098-429e-a53d-77dd8d033202","Type":"ContainerStarted","Data":"ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466"} Feb 04 12:02:05 crc kubenswrapper[4728]: I0204 12:02:05.797472 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7ppdr" podStartSLOduration=3.339327158 podStartE2EDuration="4.797450945s" podCreationTimestamp="2026-02-04 12:02:01 +0000 UTC" firstStartedPulling="2026-02-04 12:02:03.748851763 +0000 UTC m=+2072.891556158" lastFinishedPulling="2026-02-04 12:02:05.20697556 +0000 UTC m=+2074.349679945" observedRunningTime="2026-02-04 12:02:05.794997565 +0000 UTC m=+2074.937701960" watchObservedRunningTime="2026-02-04 12:02:05.797450945 +0000 UTC m=+2074.940155330" Feb 04 12:02:12 crc kubenswrapper[4728]: I0204 12:02:12.351161 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:12 crc kubenswrapper[4728]: I0204 12:02:12.352514 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:12 crc kubenswrapper[4728]: I0204 12:02:12.412681 4728 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:12 crc kubenswrapper[4728]: I0204 12:02:12.892351 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:12 crc kubenswrapper[4728]: I0204 12:02:12.941494 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ppdr"] Feb 04 12:02:14 crc kubenswrapper[4728]: I0204 12:02:14.860470 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7ppdr" podUID="1b31761a-9098-429e-a53d-77dd8d033202" containerName="registry-server" containerID="cri-o://ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466" gracePeriod=2 Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.332121 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.407820 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-catalog-content\") pod \"1b31761a-9098-429e-a53d-77dd8d033202\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.407889 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-utilities\") pod \"1b31761a-9098-429e-a53d-77dd8d033202\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.407925 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z625\" (UniqueName: \"kubernetes.io/projected/1b31761a-9098-429e-a53d-77dd8d033202-kube-api-access-2z625\") pod \"1b31761a-9098-429e-a53d-77dd8d033202\" (UID: \"1b31761a-9098-429e-a53d-77dd8d033202\") " Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.408985 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-utilities" (OuterVolumeSpecName: "utilities") pod "1b31761a-9098-429e-a53d-77dd8d033202" (UID: "1b31761a-9098-429e-a53d-77dd8d033202"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.425073 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b31761a-9098-429e-a53d-77dd8d033202-kube-api-access-2z625" (OuterVolumeSpecName: "kube-api-access-2z625") pod "1b31761a-9098-429e-a53d-77dd8d033202" (UID: "1b31761a-9098-429e-a53d-77dd8d033202"). InnerVolumeSpecName "kube-api-access-2z625". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.449046 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b31761a-9098-429e-a53d-77dd8d033202" (UID: "1b31761a-9098-429e-a53d-77dd8d033202"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.509972 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.510011 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b31761a-9098-429e-a53d-77dd8d033202-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.510026 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z625\" (UniqueName: \"kubernetes.io/projected/1b31761a-9098-429e-a53d-77dd8d033202-kube-api-access-2z625\") on node \"crc\" DevicePath \"\"" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.870217 4728 generic.go:334] "Generic (PLEG): container finished" podID="1b31761a-9098-429e-a53d-77dd8d033202" containerID="ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466" exitCode=0 Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.870266 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ppdr" event={"ID":"1b31761a-9098-429e-a53d-77dd8d033202","Type":"ContainerDied","Data":"ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466"} Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.870295 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ppdr" event={"ID":"1b31761a-9098-429e-a53d-77dd8d033202","Type":"ContainerDied","Data":"6cea157755f9571097aac9f299e07bd069b3d8046a0df56a0648fe1e0f2b1f9f"} Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.870313 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ppdr" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.870329 4728 scope.go:117] "RemoveContainer" containerID="ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.896692 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ppdr"] Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.907899 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ppdr"] Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.909878 4728 scope.go:117] "RemoveContainer" containerID="06c2b28b9ee6b52faea5db9633a06b9031af989bc13414d07fc02f5f755334b6" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.938850 4728 scope.go:117] "RemoveContainer" containerID="da634106251e9abc7ab3447a39fb96ea988b44b305f2e4ef6fb6cf0dcc47f8f7" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.982862 4728 scope.go:117] "RemoveContainer" containerID="ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466" Feb 04 12:02:15 crc kubenswrapper[4728]: E0204 12:02:15.983443 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466\": container with ID starting with ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466 not found: ID does not exist" containerID="ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.983476 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466"} err="failed to get container status \"ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466\": rpc error: code = NotFound desc = could not find container \"ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466\": container with ID starting with ec0c76fe212bc20febe5acd551ec0dfd25038029e85e4a492b67153b297e5466 not found: ID does not exist" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.983499 4728 scope.go:117] "RemoveContainer" containerID="06c2b28b9ee6b52faea5db9633a06b9031af989bc13414d07fc02f5f755334b6" Feb 04 12:02:15 crc kubenswrapper[4728]: E0204 12:02:15.983836 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c2b28b9ee6b52faea5db9633a06b9031af989bc13414d07fc02f5f755334b6\": container with ID starting with 06c2b28b9ee6b52faea5db9633a06b9031af989bc13414d07fc02f5f755334b6 not found: ID does not exist" containerID="06c2b28b9ee6b52faea5db9633a06b9031af989bc13414d07fc02f5f755334b6" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.983893 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c2b28b9ee6b52faea5db9633a06b9031af989bc13414d07fc02f5f755334b6"} err="failed to get container status \"06c2b28b9ee6b52faea5db9633a06b9031af989bc13414d07fc02f5f755334b6\": rpc error: code = NotFound desc = could not find container \"06c2b28b9ee6b52faea5db9633a06b9031af989bc13414d07fc02f5f755334b6\": container with ID starting with 06c2b28b9ee6b52faea5db9633a06b9031af989bc13414d07fc02f5f755334b6 not found: ID does not exist" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.983930 4728 scope.go:117] "RemoveContainer" 
containerID="da634106251e9abc7ab3447a39fb96ea988b44b305f2e4ef6fb6cf0dcc47f8f7" Feb 04 12:02:15 crc kubenswrapper[4728]: E0204 12:02:15.984400 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da634106251e9abc7ab3447a39fb96ea988b44b305f2e4ef6fb6cf0dcc47f8f7\": container with ID starting with da634106251e9abc7ab3447a39fb96ea988b44b305f2e4ef6fb6cf0dcc47f8f7 not found: ID does not exist" containerID="da634106251e9abc7ab3447a39fb96ea988b44b305f2e4ef6fb6cf0dcc47f8f7" Feb 04 12:02:15 crc kubenswrapper[4728]: I0204 12:02:15.984436 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da634106251e9abc7ab3447a39fb96ea988b44b305f2e4ef6fb6cf0dcc47f8f7"} err="failed to get container status \"da634106251e9abc7ab3447a39fb96ea988b44b305f2e4ef6fb6cf0dcc47f8f7\": rpc error: code = NotFound desc = could not find container \"da634106251e9abc7ab3447a39fb96ea988b44b305f2e4ef6fb6cf0dcc47f8f7\": container with ID starting with da634106251e9abc7ab3447a39fb96ea988b44b305f2e4ef6fb6cf0dcc47f8f7 not found: ID does not exist" Feb 04 12:02:17 crc kubenswrapper[4728]: I0204 12:02:17.566281 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b31761a-9098-429e-a53d-77dd8d033202" path="/var/lib/kubelet/pods/1b31761a-9098-429e-a53d-77dd8d033202/volumes" Feb 04 12:02:51 crc kubenswrapper[4728]: I0204 12:02:51.182571 4728 generic.go:334] "Generic (PLEG): container finished" podID="7dc36402-dfbd-4f2b-a604-b24331482d0e" containerID="bfbe0ceffa459cd5776cc4dba0dc641d30f861953bc1a54fc5e263afac865246" exitCode=0 Feb 04 12:02:51 crc kubenswrapper[4728]: I0204 12:02:51.182718 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667" event={"ID":"7dc36402-dfbd-4f2b-a604-b24331482d0e","Type":"ContainerDied","Data":"bfbe0ceffa459cd5776cc4dba0dc641d30f861953bc1a54fc5e263afac865246"} Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.597635 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667" Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.620978 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddnnr\" (UniqueName: \"kubernetes.io/projected/7dc36402-dfbd-4f2b-a604-b24331482d0e-kube-api-access-ddnnr\") pod \"7dc36402-dfbd-4f2b-a604-b24331482d0e\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.621050 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-ssh-key-openstack-edpm-ipam\") pod \"7dc36402-dfbd-4f2b-a604-b24331482d0e\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.645526 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc36402-dfbd-4f2b-a604-b24331482d0e-kube-api-access-ddnnr" (OuterVolumeSpecName: "kube-api-access-ddnnr") pod "7dc36402-dfbd-4f2b-a604-b24331482d0e" (UID: "7dc36402-dfbd-4f2b-a604-b24331482d0e"). InnerVolumeSpecName "kube-api-access-ddnnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.682517 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7dc36402-dfbd-4f2b-a604-b24331482d0e" (UID: "7dc36402-dfbd-4f2b-a604-b24331482d0e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.747301 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-inventory\") pod \"7dc36402-dfbd-4f2b-a604-b24331482d0e\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.747402 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-ovn-combined-ca-bundle\") pod \"7dc36402-dfbd-4f2b-a604-b24331482d0e\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.747445 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7dc36402-dfbd-4f2b-a604-b24331482d0e-ovncontroller-config-0\") pod \"7dc36402-dfbd-4f2b-a604-b24331482d0e\" (UID: \"7dc36402-dfbd-4f2b-a604-b24331482d0e\") " Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.748068 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddnnr\" (UniqueName: \"kubernetes.io/projected/7dc36402-dfbd-4f2b-a604-b24331482d0e-kube-api-access-ddnnr\") on node \"crc\" DevicePath \"\"" Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.748084 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.750013 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7dc36402-dfbd-4f2b-a604-b24331482d0e" (UID: "7dc36402-dfbd-4f2b-a604-b24331482d0e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.770479 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc36402-dfbd-4f2b-a604-b24331482d0e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7dc36402-dfbd-4f2b-a604-b24331482d0e" (UID: "7dc36402-dfbd-4f2b-a604-b24331482d0e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.781825 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-inventory" (OuterVolumeSpecName: "inventory") pod "7dc36402-dfbd-4f2b-a604-b24331482d0e" (UID: "7dc36402-dfbd-4f2b-a604-b24331482d0e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.850121 4728 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.850151 4728 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7dc36402-dfbd-4f2b-a604-b24331482d0e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:02:52 crc kubenswrapper[4728]: I0204 12:02:52.850160 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dc36402-dfbd-4f2b-a604-b24331482d0e-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.199286 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667" event={"ID":"7dc36402-dfbd-4f2b-a604-b24331482d0e","Type":"ContainerDied","Data":"4783ddf9515fea08f5c914cc868802283dd292ac5f3632a4e87a56aac7b364ca"} Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.199330 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4783ddf9515fea08f5c914cc868802283dd292ac5f3632a4e87a56aac7b364ca" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.199345 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cf667" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.298720 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz"] Feb 04 12:02:53 crc kubenswrapper[4728]: E0204 12:02:53.299314 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b31761a-9098-429e-a53d-77dd8d033202" containerName="extract-content" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.299330 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b31761a-9098-429e-a53d-77dd8d033202" containerName="extract-content" Feb 04 12:02:53 crc kubenswrapper[4728]: E0204 12:02:53.299339 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b31761a-9098-429e-a53d-77dd8d033202" containerName="registry-server" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.299345 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b31761a-9098-429e-a53d-77dd8d033202" containerName="registry-server" Feb 04 12:02:53 crc kubenswrapper[4728]: E0204 12:02:53.299368 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b31761a-9098-429e-a53d-77dd8d033202" containerName="extract-utilities" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.299375 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b31761a-9098-429e-a53d-77dd8d033202" containerName="extract-utilities" Feb 04 12:02:53 crc kubenswrapper[4728]: E0204 12:02:53.299386 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc36402-dfbd-4f2b-a604-b24331482d0e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.299393 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc36402-dfbd-4f2b-a604-b24331482d0e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.299551 4728 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7dc36402-dfbd-4f2b-a604-b24331482d0e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.299559 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b31761a-9098-429e-a53d-77dd8d033202" containerName="registry-server" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.300144 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.310713 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.310934 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.316127 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.316184 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.316745 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.316987 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.330541 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz"] Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.359746 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.360251 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.360345 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x55wb\" (UniqueName: \"kubernetes.io/projected/aee1c19a-e4cd-4457-a13a-434722c5d516-kube-api-access-x55wb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.360464 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.360532 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.360969 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.463244 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.463313 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x55wb\" (UniqueName: \"kubernetes.io/projected/aee1c19a-e4cd-4457-a13a-434722c5d516-kube-api-access-x55wb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.463365 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.463399 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.463545 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.463600 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.467185 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.467268 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.467538 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.468413 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.469393 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.481187 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x55wb\" (UniqueName: \"kubernetes.io/projected/aee1c19a-e4cd-4457-a13a-434722c5d516-kube-api-access-x55wb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:53 crc kubenswrapper[4728]: I0204 12:02:53.624054 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:02:54 crc kubenswrapper[4728]: I0204 12:02:54.212170 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz"] Feb 04 12:02:55 crc kubenswrapper[4728]: I0204 12:02:55.226409 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" event={"ID":"aee1c19a-e4cd-4457-a13a-434722c5d516","Type":"ContainerStarted","Data":"ba5628ff55de159146d99958bc3b40b38d9d88518e4ae673999ad2f8bb48c5e0"} Feb 04 12:02:55 crc kubenswrapper[4728]: I0204 12:02:55.226867 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" event={"ID":"aee1c19a-e4cd-4457-a13a-434722c5d516","Type":"ContainerStarted","Data":"994b2e1362f943f2b589111c6ece9860cd2cd7be8c8ed5261623b405a5cc096e"} Feb 04 12:02:55 crc kubenswrapper[4728]: I0204 12:02:55.251886 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" podStartSLOduration=1.708217961 podStartE2EDuration="2.251864226s" podCreationTimestamp="2026-02-04 12:02:53 +0000 UTC" firstStartedPulling="2026-02-04 12:02:54.212162912 +0000 UTC m=+2123.354867297" lastFinishedPulling="2026-02-04 12:02:54.755809177 +0000 UTC m=+2123.898513562" observedRunningTime="2026-02-04 12:02:55.243598423 +0000 UTC m=+2124.386302808" watchObservedRunningTime="2026-02-04 12:02:55.251864226 +0000 UTC m=+2124.394568601" Feb 04 12:03:35 crc kubenswrapper[4728]: I0204 12:03:35.448050 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:03:35 crc kubenswrapper[4728]: I0204 12:03:35.448609 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.662869 4728 generic.go:334] "Generic (PLEG): container finished" podID="aee1c19a-e4cd-4457-a13a-434722c5d516" containerID="ba5628ff55de159146d99958bc3b40b38d9d88518e4ae673999ad2f8bb48c5e0" exitCode=0 Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.663285 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" event={"ID":"aee1c19a-e4cd-4457-a13a-434722c5d516","Type":"ContainerDied","Data":"ba5628ff55de159146d99958bc3b40b38d9d88518e4ae673999ad2f8bb48c5e0"} Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.704050 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kghh7"] Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.709131 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.714782 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kghh7"] Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.815158 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b804625-3e54-4745-927e-7c678ec22f82-utilities\") pod \"community-operators-kghh7\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") " pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.815598 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b804625-3e54-4745-927e-7c678ec22f82-catalog-content\") pod \"community-operators-kghh7\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") " pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.815952 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7szv\" (UniqueName: \"kubernetes.io/projected/7b804625-3e54-4745-927e-7c678ec22f82-kube-api-access-p7szv\") pod \"community-operators-kghh7\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") " pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.917992 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b804625-3e54-4745-927e-7c678ec22f82-utilities\") pod \"community-operators-kghh7\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") " pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.918148 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b804625-3e54-4745-927e-7c678ec22f82-catalog-content\") pod \"community-operators-kghh7\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") " pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.918253 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7szv\" (UniqueName: \"kubernetes.io/projected/7b804625-3e54-4745-927e-7c678ec22f82-kube-api-access-p7szv\") pod \"community-operators-kghh7\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") " pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.918565 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b804625-3e54-4745-927e-7c678ec22f82-catalog-content\") pod \"community-operators-kghh7\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") " pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.918578 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b804625-3e54-4745-927e-7c678ec22f82-utilities\") pod \"community-operators-kghh7\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") " pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:37 crc kubenswrapper[4728]: I0204 12:03:37.938540 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p7szv\" (UniqueName: \"kubernetes.io/projected/7b804625-3e54-4745-927e-7c678ec22f82-kube-api-access-p7szv\") pod \"community-operators-kghh7\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") " pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:38 crc kubenswrapper[4728]: I0204 12:03:38.030851 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:38 crc kubenswrapper[4728]: I0204 12:03:38.659170 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kghh7"] Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.107348 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.261474 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x55wb\" (UniqueName: \"kubernetes.io/projected/aee1c19a-e4cd-4457-a13a-434722c5d516-kube-api-access-x55wb\") pod \"aee1c19a-e4cd-4457-a13a-434722c5d516\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.261524 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-nova-metadata-neutron-config-0\") pod \"aee1c19a-e4cd-4457-a13a-434722c5d516\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.261574 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-metadata-combined-ca-bundle\") pod \"aee1c19a-e4cd-4457-a13a-434722c5d516\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.261613 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-inventory\") pod \"aee1c19a-e4cd-4457-a13a-434722c5d516\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.261643 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-ssh-key-openstack-edpm-ipam\") pod \"aee1c19a-e4cd-4457-a13a-434722c5d516\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.261682 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-ovn-metadata-agent-neutron-config-0\") pod \"aee1c19a-e4cd-4457-a13a-434722c5d516\" (UID: \"aee1c19a-e4cd-4457-a13a-434722c5d516\") " Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.267907 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "aee1c19a-e4cd-4457-a13a-434722c5d516" (UID: "aee1c19a-e4cd-4457-a13a-434722c5d516"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.271636 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee1c19a-e4cd-4457-a13a-434722c5d516-kube-api-access-x55wb" (OuterVolumeSpecName: "kube-api-access-x55wb") pod "aee1c19a-e4cd-4457-a13a-434722c5d516" (UID: "aee1c19a-e4cd-4457-a13a-434722c5d516"). InnerVolumeSpecName "kube-api-access-x55wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.291236 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "aee1c19a-e4cd-4457-a13a-434722c5d516" (UID: "aee1c19a-e4cd-4457-a13a-434722c5d516"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.291335 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-inventory" (OuterVolumeSpecName: "inventory") pod "aee1c19a-e4cd-4457-a13a-434722c5d516" (UID: "aee1c19a-e4cd-4457-a13a-434722c5d516"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.310410 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "aee1c19a-e4cd-4457-a13a-434722c5d516" (UID: "aee1c19a-e4cd-4457-a13a-434722c5d516"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.311669 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aee1c19a-e4cd-4457-a13a-434722c5d516" (UID: "aee1c19a-e4cd-4457-a13a-434722c5d516"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.364845 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x55wb\" (UniqueName: \"kubernetes.io/projected/aee1c19a-e4cd-4457-a13a-434722c5d516-kube-api-access-x55wb\") on node \"crc\" DevicePath \"\"" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.364928 4728 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.364958 4728 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.364995 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.365023 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.365051 4728 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/aee1c19a-e4cd-4457-a13a-434722c5d516-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.693042 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b804625-3e54-4745-927e-7c678ec22f82" containerID="8eac5a98569c5f4758ff4e0decdebec25620a210b23d56f2ff5cc2a48e289bba" exitCode=0 Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.693113 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kghh7" event={"ID":"7b804625-3e54-4745-927e-7c678ec22f82","Type":"ContainerDied","Data":"8eac5a98569c5f4758ff4e0decdebec25620a210b23d56f2ff5cc2a48e289bba"} Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.693142 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kghh7" event={"ID":"7b804625-3e54-4745-927e-7c678ec22f82","Type":"ContainerStarted","Data":"1e0750a9883004e0ede9d8b53321fb9e2a03924fa2ec1999bdc944e438ba2fed"} Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.696315 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" event={"ID":"aee1c19a-e4cd-4457-a13a-434722c5d516","Type":"ContainerDied","Data":"994b2e1362f943f2b589111c6ece9860cd2cd7be8c8ed5261623b405a5cc096e"} Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.696367 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="994b2e1362f943f2b589111c6ece9860cd2cd7be8c8ed5261623b405a5cc096e" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.696388 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.798025 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk"] Feb 04 12:03:39 crc kubenswrapper[4728]: E0204 12:03:39.798405 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee1c19a-e4cd-4457-a13a-434722c5d516" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.798422 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee1c19a-e4cd-4457-a13a-434722c5d516" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.798626 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee1c19a-e4cd-4457-a13a-434722c5d516" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.799250 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.801396 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.801858 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.801987 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.802615 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.802733 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.815570 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk"] Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.977912 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnkmp\" (UniqueName: \"kubernetes.io/projected/2350eb57-6059-4c44-8213-0472e0295ae5-kube-api-access-fnkmp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.978161 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.978603 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: 
\"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.978668 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:39 crc kubenswrapper[4728]: I0204 12:03:39.978710 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.079950 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.080080 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.080101 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.080123 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.080143 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnkmp\" (UniqueName: \"kubernetes.io/projected/2350eb57-6059-4c44-8213-0472e0295ae5-kube-api-access-fnkmp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.086305 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: 
\"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.086668 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.087023 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.088012 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.105583 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnkmp\" (UniqueName: \"kubernetes.io/projected/2350eb57-6059-4c44-8213-0472e0295ae5-kube-api-access-fnkmp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.115278 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.639710 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk"] Feb 04 12:03:40 crc kubenswrapper[4728]: W0204 12:03:40.644975 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2350eb57_6059_4c44_8213_0472e0295ae5.slice/crio-6a51387664bac1fe170f32360d691aa550a1c16d9838a0d1751396a12541bae3 WatchSource:0}: Error finding container 6a51387664bac1fe170f32360d691aa550a1c16d9838a0d1751396a12541bae3: Status 404 returned error can't find the container with id 6a51387664bac1fe170f32360d691aa550a1c16d9838a0d1751396a12541bae3 Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.710241 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kghh7" event={"ID":"7b804625-3e54-4745-927e-7c678ec22f82","Type":"ContainerStarted","Data":"604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3"} Feb 04 12:03:40 crc kubenswrapper[4728]: I0204 12:03:40.713403 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" event={"ID":"2350eb57-6059-4c44-8213-0472e0295ae5","Type":"ContainerStarted","Data":"6a51387664bac1fe170f32360d691aa550a1c16d9838a0d1751396a12541bae3"} Feb 04 12:03:41 crc kubenswrapper[4728]: I0204 12:03:41.723945 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" event={"ID":"2350eb57-6059-4c44-8213-0472e0295ae5","Type":"ContainerStarted","Data":"1c03a6e9d9f8dd11d309c4c58b46bdf4c628b8f0313c3e9a82a9520b639ff9d1"} Feb 04 12:03:41 crc kubenswrapper[4728]: I0204 12:03:41.726325 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b804625-3e54-4745-927e-7c678ec22f82" containerID="604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3" exitCode=0 Feb 04 12:03:41 crc kubenswrapper[4728]: I0204 12:03:41.726363 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kghh7" event={"ID":"7b804625-3e54-4745-927e-7c678ec22f82","Type":"ContainerDied","Data":"604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3"} Feb 04 12:03:41 crc kubenswrapper[4728]: I0204 12:03:41.744627 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" podStartSLOduration=2.307402313 podStartE2EDuration="2.744612648s" podCreationTimestamp="2026-02-04 12:03:39 +0000 UTC" firstStartedPulling="2026-02-04 12:03:40.646654085 +0000 UTC m=+2169.789358470" lastFinishedPulling="2026-02-04 12:03:41.08386438 +0000 UTC m=+2170.226568805" observedRunningTime="2026-02-04 12:03:41.741265296 +0000 UTC m=+2170.883969691" watchObservedRunningTime="2026-02-04 12:03:41.744612648 +0000 UTC m=+2170.887317033" Feb 04 12:03:42 crc kubenswrapper[4728]: I0204 12:03:42.743171 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kghh7" event={"ID":"7b804625-3e54-4745-927e-7c678ec22f82","Type":"ContainerStarted","Data":"d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1"} Feb 04 12:03:42 crc kubenswrapper[4728]: I0204 12:03:42.762534 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kghh7" 
Feb 04 12:03:42 crc kubenswrapper[4728]: I0204 12:03:42.762534 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kghh7" podStartSLOduration=3.068548356 podStartE2EDuration="5.762513558s" podCreationTimestamp="2026-02-04 12:03:37 +0000 UTC" firstStartedPulling="2026-02-04 12:03:39.695670168 +0000 UTC m=+2168.838374563" lastFinishedPulling="2026-02-04 12:03:42.38963534 +0000 UTC m=+2171.532339765" observedRunningTime="2026-02-04 12:03:42.758778525 +0000 UTC m=+2171.901482910" watchObservedRunningTime="2026-02-04 12:03:42.762513558 +0000 UTC m=+2171.905217963"
Feb 04 12:03:48 crc kubenswrapper[4728]: I0204 12:03:48.031520 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kghh7"
Feb 04 12:03:48 crc kubenswrapper[4728]: I0204 12:03:48.032187 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kghh7"
Feb 04 12:03:48 crc kubenswrapper[4728]: I0204 12:03:48.074924 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kghh7"
Feb 04 12:03:48 crc kubenswrapper[4728]: I0204 12:03:48.836760 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kghh7"
Feb 04 12:03:48 crc kubenswrapper[4728]: I0204 12:03:48.889534 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kghh7"]
Feb 04 12:03:50 crc kubenswrapper[4728]: I0204 12:03:50.805913 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kghh7" podUID="7b804625-3e54-4745-927e-7c678ec22f82" containerName="registry-server" containerID="cri-o://d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1" gracePeriod=2
Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.288094 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kghh7"
Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.409945 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b804625-3e54-4745-927e-7c678ec22f82-catalog-content\") pod \"7b804625-3e54-4745-927e-7c678ec22f82\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") "
Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.410362 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7szv\" (UniqueName: \"kubernetes.io/projected/7b804625-3e54-4745-927e-7c678ec22f82-kube-api-access-p7szv\") pod \"7b804625-3e54-4745-927e-7c678ec22f82\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") "
Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.410631 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b804625-3e54-4745-927e-7c678ec22f82-utilities\") pod \"7b804625-3e54-4745-927e-7c678ec22f82\" (UID: \"7b804625-3e54-4745-927e-7c678ec22f82\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.417251 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b804625-3e54-4745-927e-7c678ec22f82-kube-api-access-p7szv" (OuterVolumeSpecName: "kube-api-access-p7szv") pod "7b804625-3e54-4745-927e-7c678ec22f82" (UID: "7b804625-3e54-4745-927e-7c678ec22f82"). InnerVolumeSpecName "kube-api-access-p7szv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.513435 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7szv\" (UniqueName: \"kubernetes.io/projected/7b804625-3e54-4745-927e-7c678ec22f82-kube-api-access-p7szv\") on node \"crc\" DevicePath \"\"" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.513480 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b804625-3e54-4745-927e-7c678ec22f82-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.814987 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b804625-3e54-4745-927e-7c678ec22f82" containerID="d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1" exitCode=0 Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.815036 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kghh7" event={"ID":"7b804625-3e54-4745-927e-7c678ec22f82","Type":"ContainerDied","Data":"d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1"} Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.815066 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kghh7" event={"ID":"7b804625-3e54-4745-927e-7c678ec22f82","Type":"ContainerDied","Data":"1e0750a9883004e0ede9d8b53321fb9e2a03924fa2ec1999bdc944e438ba2fed"} Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.815105 4728 scope.go:117] "RemoveContainer" containerID="d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.815887 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kghh7" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.839229 4728 scope.go:117] "RemoveContainer" containerID="604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.867911 4728 scope.go:117] "RemoveContainer" containerID="8eac5a98569c5f4758ff4e0decdebec25620a210b23d56f2ff5cc2a48e289bba" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.917307 4728 scope.go:117] "RemoveContainer" containerID="d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1" Feb 04 12:03:51 crc kubenswrapper[4728]: E0204 12:03:51.919150 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1\": container with ID starting with d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1 not found: ID does not exist" containerID="d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.919205 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1"} err="failed to get container status \"d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1\": rpc error: code = NotFound desc = could not find container \"d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1\": container with ID starting with d6149d4ef32dbe58e0b006f2a1ad1820cac64763a7d67226a2ae19565fe954d1 not found: ID does not exist" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.919234 4728 scope.go:117] "RemoveContainer" containerID="604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3" Feb 04 12:03:51 crc kubenswrapper[4728]: E0204 12:03:51.919561 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3\": container with ID starting with 604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3 not found: ID does not exist" containerID="604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.919583 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3"} err="failed to get container status \"604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3\": rpc error: code = NotFound desc = could not find container \"604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3\": container with ID starting with 604b025dfca3d35699b6edad30d808c2ca2f3362320ab3257e76acb7eea8a9f3 not found: ID does not exist" Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.919599 4728 scope.go:117] "RemoveContainer" containerID="8eac5a98569c5f4758ff4e0decdebec25620a210b23d56f2ff5cc2a48e289bba" Feb 04 12:03:51 crc kubenswrapper[4728]: E0204 12:03:51.919867 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eac5a98569c5f4758ff4e0decdebec25620a210b23d56f2ff5cc2a48e289bba\": container with ID starting with 8eac5a98569c5f4758ff4e0decdebec25620a210b23d56f2ff5cc2a48e289bba not found: ID does not exist" containerID="8eac5a98569c5f4758ff4e0decdebec25620a210b23d56f2ff5cc2a48e289bba" 
Feb 04 12:03:51 crc kubenswrapper[4728]: I0204 12:03:51.919918 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eac5a98569c5f4758ff4e0decdebec25620a210b23d56f2ff5cc2a48e289bba"} err="failed to get container status \"8eac5a98569c5f4758ff4e0decdebec25620a210b23d56f2ff5cc2a48e289bba\": rpc error: code = NotFound desc = could not find container \"8eac5a98569c5f4758ff4e0decdebec25620a210b23d56f2ff5cc2a48e289bba\": container with ID starting with 8eac5a98569c5f4758ff4e0decdebec25620a210b23d56f2ff5cc2a48e289bba not found: ID does not exist"
Feb 04 12:03:52 crc kubenswrapper[4728]: I0204 12:03:52.030392 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b804625-3e54-4745-927e-7c678ec22f82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b804625-3e54-4745-927e-7c678ec22f82" (UID: "7b804625-3e54-4745-927e-7c678ec22f82"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:03:52 crc kubenswrapper[4728]: I0204 12:03:52.124528 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b804625-3e54-4745-927e-7c678ec22f82-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 12:03:52 crc kubenswrapper[4728]: I0204 12:03:52.150329 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kghh7"]
Feb 04 12:03:52 crc kubenswrapper[4728]: I0204 12:03:52.158417 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kghh7"]
Feb 04 12:03:53 crc kubenswrapper[4728]: I0204 12:03:53.565791 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b804625-3e54-4745-927e-7c678ec22f82" path="/var/lib/kubelet/pods/7b804625-3e54-4745-927e-7c678ec22f82/volumes"
Feb 04 12:04:00 crc kubenswrapper[4728]: I0204 12:04:00.308393 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-84679c4c57-hc428" podUID="58949fe9-f572-4c71-80c8-925cee89421e" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Feb 04 12:04:05 crc kubenswrapper[4728]: I0204 12:04:05.448322 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 12:04:05 crc kubenswrapper[4728]: I0204 12:04:05.449031 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 12:04:35 crc kubenswrapper[4728]: I0204 12:04:35.448915 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:04:35 crc kubenswrapper[4728]: I0204 12:04:35.449736 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 12:04:35 crc kubenswrapper[4728]: I0204 12:04:35.451177 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 12:04:35 crc kubenswrapper[4728]: I0204 12:04:35.451411 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" gracePeriod=600 Feb 04 12:04:35 crc kubenswrapper[4728]: E0204 12:04:35.593334 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:04:36 crc kubenswrapper[4728]: I0204 12:04:36.218617 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" exitCode=0 Feb 04 12:04:36 crc kubenswrapper[4728]: I0204 12:04:36.218673 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860"} Feb 04 12:04:36 crc kubenswrapper[4728]: I0204 12:04:36.218732 4728 scope.go:117] "RemoveContainer" containerID="529af0f7f966a4ea0b6e4a1f05c7ef144a460c0249245b7c950d3e46bc1f0c22" Feb 04 12:04:36 crc kubenswrapper[4728]: I0204 12:04:36.219683 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:04:36 crc kubenswrapper[4728]: E0204 12:04:36.219961 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:04:48 crc kubenswrapper[4728]: I0204 12:04:48.553866 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:04:48 crc kubenswrapper[4728]: E0204 12:04:48.554673 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:05:02 crc kubenswrapper[4728]: I0204 12:05:02.554700 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:05:02 crc kubenswrapper[4728]: E0204 12:05:02.555546 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:05:16 crc kubenswrapper[4728]: I0204 12:05:16.554876 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:05:16 crc kubenswrapper[4728]: E0204 12:05:16.556045 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:05:30 crc kubenswrapper[4728]: I0204 12:05:30.554576 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:05:30 crc kubenswrapper[4728]: E0204 12:05:30.555492 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:05:45 crc kubenswrapper[4728]: I0204 12:05:45.554009 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:05:45 crc kubenswrapper[4728]: E0204 12:05:45.554899 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:06:00 crc kubenswrapper[4728]: I0204 12:06:00.553626 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:06:00 crc kubenswrapper[4728]: E0204 12:06:00.554342 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:06:12 crc kubenswrapper[4728]: I0204 12:06:12.553881 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:06:12 crc kubenswrapper[4728]: E0204 12:06:12.554785 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:06:25 crc kubenswrapper[4728]: I0204 12:06:25.554800 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:06:25 crc kubenswrapper[4728]: E0204 12:06:25.555873 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:06:36 crc kubenswrapper[4728]: I0204 12:06:36.554529 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:06:36 crc kubenswrapper[4728]: E0204 12:06:36.555344 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:06:48 crc kubenswrapper[4728]: I0204 12:06:48.553399 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:06:48 crc kubenswrapper[4728]: E0204 12:06:48.554172 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:06:59 crc kubenswrapper[4728]: I0204 12:06:59.553515 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:06:59 crc kubenswrapper[4728]: E0204 12:06:59.554542 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:07:10 crc kubenswrapper[4728]: I0204 12:07:10.553888 4728 
scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:07:10 crc kubenswrapper[4728]: E0204 12:07:10.554555 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:07:23 crc kubenswrapper[4728]: I0204 12:07:23.554921 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:07:23 crc kubenswrapper[4728]: E0204 12:07:23.555614 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:07:35 crc kubenswrapper[4728]: I0204 12:07:35.893871 4728 generic.go:334] "Generic (PLEG): container finished" podID="2350eb57-6059-4c44-8213-0472e0295ae5" containerID="1c03a6e9d9f8dd11d309c4c58b46bdf4c628b8f0313c3e9a82a9520b639ff9d1" exitCode=0 Feb 04 12:07:35 crc kubenswrapper[4728]: I0204 12:07:35.893956 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" event={"ID":"2350eb57-6059-4c44-8213-0472e0295ae5","Type":"ContainerDied","Data":"1c03a6e9d9f8dd11d309c4c58b46bdf4c628b8f0313c3e9a82a9520b639ff9d1"} Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.354851 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.460579 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-secret-0\") pod \"2350eb57-6059-4c44-8213-0472e0295ae5\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.460693 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-inventory\") pod \"2350eb57-6059-4c44-8213-0472e0295ae5\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.460718 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-ssh-key-openstack-edpm-ipam\") pod \"2350eb57-6059-4c44-8213-0472e0295ae5\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.460932 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-combined-ca-bundle\") pod \"2350eb57-6059-4c44-8213-0472e0295ae5\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.460958 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnkmp\" (UniqueName: \"kubernetes.io/projected/2350eb57-6059-4c44-8213-0472e0295ae5-kube-api-access-fnkmp\") pod \"2350eb57-6059-4c44-8213-0472e0295ae5\" (UID: \"2350eb57-6059-4c44-8213-0472e0295ae5\") " Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.465951 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2350eb57-6059-4c44-8213-0472e0295ae5-kube-api-access-fnkmp" (OuterVolumeSpecName: "kube-api-access-fnkmp") pod "2350eb57-6059-4c44-8213-0472e0295ae5" (UID: "2350eb57-6059-4c44-8213-0472e0295ae5"). InnerVolumeSpecName "kube-api-access-fnkmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.466169 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2350eb57-6059-4c44-8213-0472e0295ae5" (UID: "2350eb57-6059-4c44-8213-0472e0295ae5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.488064 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2350eb57-6059-4c44-8213-0472e0295ae5" (UID: "2350eb57-6059-4c44-8213-0472e0295ae5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.489857 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-inventory" (OuterVolumeSpecName: "inventory") pod "2350eb57-6059-4c44-8213-0472e0295ae5" (UID: "2350eb57-6059-4c44-8213-0472e0295ae5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.490975 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "2350eb57-6059-4c44-8213-0472e0295ae5" (UID: "2350eb57-6059-4c44-8213-0472e0295ae5"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.562629 4728 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.562874 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.562884 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.562896 4728 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2350eb57-6059-4c44-8213-0472e0295ae5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.562905 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnkmp\" (UniqueName: \"kubernetes.io/projected/2350eb57-6059-4c44-8213-0472e0295ae5-kube-api-access-fnkmp\") on node \"crc\" DevicePath \"\"" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.912503 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" event={"ID":"2350eb57-6059-4c44-8213-0472e0295ae5","Type":"ContainerDied","Data":"6a51387664bac1fe170f32360d691aa550a1c16d9838a0d1751396a12541bae3"} Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.912542 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a51387664bac1fe170f32360d691aa550a1c16d9838a0d1751396a12541bae3" Feb 04 12:07:37 crc kubenswrapper[4728]: I0204 12:07:37.912612 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.005198 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4"] Feb 04 12:07:38 crc kubenswrapper[4728]: E0204 12:07:38.005572 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b804625-3e54-4745-927e-7c678ec22f82" containerName="registry-server" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.005598 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b804625-3e54-4745-927e-7c678ec22f82" containerName="registry-server" Feb 04 12:07:38 crc kubenswrapper[4728]: E0204 12:07:38.005630 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2350eb57-6059-4c44-8213-0472e0295ae5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.005637 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2350eb57-6059-4c44-8213-0472e0295ae5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 04 12:07:38 crc kubenswrapper[4728]: E0204 12:07:38.005648 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b804625-3e54-4745-927e-7c678ec22f82" containerName="extract-utilities" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.005654 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b804625-3e54-4745-927e-7c678ec22f82" containerName="extract-utilities" Feb 04 12:07:38 crc kubenswrapper[4728]: E0204 12:07:38.005661 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b804625-3e54-4745-927e-7c678ec22f82" containerName="extract-content" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.005669 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b804625-3e54-4745-927e-7c678ec22f82" containerName="extract-content" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.005847 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b804625-3e54-4745-927e-7c678ec22f82" containerName="registry-server" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.005881 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2350eb57-6059-4c44-8213-0472e0295ae5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.006595 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.012524 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.012999 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.013017 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.013138 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.013191 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.013264 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.015384 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.019930 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4"] Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.087790 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.087870 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.087893 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.087921 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.087987 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.088010 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.088242 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm7p4\" (UniqueName: \"kubernetes.io/projected/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-kube-api-access-xm7p4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.088290 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.088331 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.190944 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.191081 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.191123 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.191202 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xm7p4\" (UniqueName: \"kubernetes.io/projected/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-kube-api-access-xm7p4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.191230 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.191257 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.191294 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.191340 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.191361 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.192415 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.194848 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.195255 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" 
(UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.195277 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.195366 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.195665 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.196411 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.197098 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.207767 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm7p4\" (UniqueName: \"kubernetes.io/projected/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-kube-api-access-xm7p4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qsdd4\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.324130 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.635296 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:07:38 crc kubenswrapper[4728]: E0204 12:07:38.635905 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.767684 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4"] Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.776422 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 12:07:38 crc kubenswrapper[4728]: I0204 12:07:38.920706 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" event={"ID":"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2","Type":"ContainerStarted","Data":"4e6d12feccbc62e48ab0f6a799a867d6347352ef729e5e3b895c5e4c3f2689f5"} Feb 04 12:07:39 crc kubenswrapper[4728]: I0204 12:07:39.928824 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" event={"ID":"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2","Type":"ContainerStarted","Data":"4a01c47e7bc3f5b5b5dec98b485249838438fd6dbe3a990e398f32156b561b8f"} Feb 04 12:07:39 crc kubenswrapper[4728]: I0204 12:07:39.944175 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" podStartSLOduration=2.355270413 podStartE2EDuration="2.94415716s" podCreationTimestamp="2026-02-04 12:07:37 +0000 UTC" firstStartedPulling="2026-02-04 12:07:38.776171174 +0000 UTC m=+2407.918875569" lastFinishedPulling="2026-02-04 12:07:39.365057931 +0000 UTC m=+2408.507762316" observedRunningTime="2026-02-04 12:07:39.942822088 +0000 UTC m=+2409.085526493" watchObservedRunningTime="2026-02-04 12:07:39.94415716 +0000 UTC m=+2409.086861545" Feb 04 12:07:52 crc kubenswrapper[4728]: I0204 12:07:52.554069 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:07:52 crc kubenswrapper[4728]: E0204 12:07:52.554883 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:08:04 crc kubenswrapper[4728]: I0204 12:08:04.554061 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:08:04 crc kubenswrapper[4728]: E0204 12:08:04.554907 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:08:19 crc kubenswrapper[4728]: I0204 12:08:19.554487 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:08:19 crc kubenswrapper[4728]: E0204 12:08:19.561422 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:08:32 crc kubenswrapper[4728]: I0204 12:08:32.554569 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:08:32 crc kubenswrapper[4728]: E0204 12:08:32.556145 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:08:43 crc kubenswrapper[4728]: I0204 12:08:43.553994 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:08:43 crc kubenswrapper[4728]: E0204 12:08:43.554729 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:08:56 crc kubenswrapper[4728]: I0204 12:08:56.554384 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:08:56 crc kubenswrapper[4728]: E0204 12:08:56.555234 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:09:09 crc kubenswrapper[4728]: I0204 12:09:09.554680 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:09:09 crc kubenswrapper[4728]: E0204 12:09:09.555785 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" 
podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:09:24 crc kubenswrapper[4728]: I0204 12:09:24.554353 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:09:24 crc kubenswrapper[4728]: E0204 12:09:24.555376 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:09:35 crc kubenswrapper[4728]: I0204 12:09:35.554261 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860" Feb 04 12:09:35 crc kubenswrapper[4728]: I0204 12:09:35.925846 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"050fd42d478ece55fcb94795bb9bece6d44a8e884401343e87b6b7c0856343b1"} Feb 04 12:09:41 crc kubenswrapper[4728]: I0204 12:09:41.984518 4728 generic.go:334] "Generic (PLEG): container finished" podID="52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" containerID="4a01c47e7bc3f5b5b5dec98b485249838438fd6dbe3a990e398f32156b561b8f" exitCode=0 Feb 04 12:09:41 crc kubenswrapper[4728]: I0204 12:09:41.984595 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" event={"ID":"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2","Type":"ContainerDied","Data":"4a01c47e7bc3f5b5b5dec98b485249838438fd6dbe3a990e398f32156b561b8f"} Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.444941 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.533994 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-ssh-key-openstack-edpm-ipam\") pod \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.534151 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-0\") pod \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.534187 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-inventory\") pod \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.534243 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-1\") pod \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.534275 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-1\") pod \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.534295 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-0\") pod \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.534393 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-combined-ca-bundle\") pod \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.534434 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm7p4\" (UniqueName: \"kubernetes.io/projected/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-kube-api-access-xm7p4\") pod \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.534458 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-extra-config-0\") pod \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\" (UID: \"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2\") " Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.550001 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-kube-api-access-xm7p4" (OuterVolumeSpecName: "kube-api-access-xm7p4") pod "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" (UID: "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2"). InnerVolumeSpecName "kube-api-access-xm7p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.551929 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" (UID: "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.575380 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" (UID: "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.578937 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" (UID: "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.587934 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" (UID: "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.588392 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" (UID: "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.589242 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" (UID: "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.598502 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-inventory" (OuterVolumeSpecName: "inventory") pod "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" (UID: "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.609753 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" (UID: "52c00cc4-57a9-41e7-99f0-1cc6fa0442e2"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.639408 4728 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.639437 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm7p4\" (UniqueName: \"kubernetes.io/projected/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-kube-api-access-xm7p4\") on node \"crc\" DevicePath \"\"" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.639446 4728 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.639455 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.639463 4728 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.639472 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-inventory\") on node \"crc\" DevicePath \"\"" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.639481 4728 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.639497 4728 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 04 12:09:43 crc kubenswrapper[4728]: I0204 12:09:43.639505 4728 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/52c00cc4-57a9-41e7-99f0-1cc6fa0442e2-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.005578 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" event={"ID":"52c00cc4-57a9-41e7-99f0-1cc6fa0442e2","Type":"ContainerDied","Data":"4e6d12feccbc62e48ab0f6a799a867d6347352ef729e5e3b895c5e4c3f2689f5"} Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.005622 4728 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4e6d12feccbc62e48ab0f6a799a867d6347352ef729e5e3b895c5e4c3f2689f5" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.005726 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qsdd4" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.112176 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz"] Feb 04 12:09:44 crc kubenswrapper[4728]: E0204 12:09:44.112681 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.112705 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.113106 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="52c00cc4-57a9-41e7-99f0-1cc6fa0442e2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.113892 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.117197 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.117365 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dbscq" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.117875 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.118141 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.120459 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.130370 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz"] Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.147485 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w9pk\" (UniqueName: \"kubernetes.io/projected/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-kube-api-access-7w9pk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.147583 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.147637 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" 
(UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.147928 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.147974 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.147992 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.148023 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.248598 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.248666 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.248687 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.248719 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.248741 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w9pk\" (UniqueName: \"kubernetes.io/projected/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-kube-api-access-7w9pk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.248803 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.248828 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.252890 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.255213 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.256538 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.256626 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.256736 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.256855 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.268389 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w9pk\" (UniqueName: \"kubernetes.io/projected/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-kube-api-access-7w9pk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.432522 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:09:44 crc kubenswrapper[4728]: I0204 12:09:44.969993 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz"] Feb 04 12:09:45 crc kubenswrapper[4728]: I0204 12:09:45.015365 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" event={"ID":"4bf728e8-5290-463c-bd6a-194f0ddb3c5d","Type":"ContainerStarted","Data":"37d15160f0cd51978f17402e036097e7b643c869c9871e6b4beba8b5d1f4bb0f"} Feb 04 12:09:47 crc kubenswrapper[4728]: I0204 12:09:47.042317 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" event={"ID":"4bf728e8-5290-463c-bd6a-194f0ddb3c5d","Type":"ContainerStarted","Data":"cb61d9998d4889510888fbcd93c80a6aaaf40edcacf77c3953517568be4349a4"} Feb 04 12:09:47 crc kubenswrapper[4728]: I0204 12:09:47.062792 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" podStartSLOduration=2.285227985 podStartE2EDuration="3.062770978s" podCreationTimestamp="2026-02-04 12:09:44 +0000 UTC" firstStartedPulling="2026-02-04 12:09:44.975462944 +0000 UTC m=+2534.118167329" lastFinishedPulling="2026-02-04 12:09:45.753005937 +0000 UTC m=+2534.895710322" observedRunningTime="2026-02-04 12:09:47.058295119 +0000 UTC m=+2536.200999504" watchObservedRunningTime="2026-02-04 12:09:47.062770978 +0000 UTC m=+2536.205475363" Feb 04 12:10:31 crc kubenswrapper[4728]: I0204 12:10:31.853972 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qvrqr"] Feb 04 12:10:31 crc kubenswrapper[4728]: I0204 12:10:31.857490 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:31 crc kubenswrapper[4728]: I0204 12:10:31.873587 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvrqr"] Feb 04 12:10:31 crc kubenswrapper[4728]: I0204 12:10:31.969076 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-catalog-content\") pod \"redhat-operators-qvrqr\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:31 crc kubenswrapper[4728]: I0204 12:10:31.969229 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-utilities\") pod \"redhat-operators-qvrqr\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:31 crc kubenswrapper[4728]: I0204 12:10:31.969251 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9jjf\" (UniqueName: \"kubernetes.io/projected/67c7f22c-65c3-47d0-b463-85771ffbd349-kube-api-access-x9jjf\") pod \"redhat-operators-qvrqr\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:32 crc kubenswrapper[4728]: I0204 12:10:32.071564 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-utilities\") pod \"redhat-operators-qvrqr\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:32 crc kubenswrapper[4728]: I0204 12:10:32.071630 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9jjf\" (UniqueName: \"kubernetes.io/projected/67c7f22c-65c3-47d0-b463-85771ffbd349-kube-api-access-x9jjf\") pod \"redhat-operators-qvrqr\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:32 crc kubenswrapper[4728]: I0204 12:10:32.071739 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-catalog-content\") pod \"redhat-operators-qvrqr\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:32 crc kubenswrapper[4728]: I0204 12:10:32.072056 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-utilities\") pod \"redhat-operators-qvrqr\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:32 crc kubenswrapper[4728]: I0204 12:10:32.072184 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-catalog-content\") pod \"redhat-operators-qvrqr\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:32 crc kubenswrapper[4728]: I0204 12:10:32.096513 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x9jjf\" (UniqueName: \"kubernetes.io/projected/67c7f22c-65c3-47d0-b463-85771ffbd349-kube-api-access-x9jjf\") pod \"redhat-operators-qvrqr\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:32 crc kubenswrapper[4728]: I0204 12:10:32.189992 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:32 crc kubenswrapper[4728]: I0204 12:10:32.666686 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvrqr"] Feb 04 12:10:33 crc kubenswrapper[4728]: I0204 12:10:33.436556 4728 generic.go:334] "Generic (PLEG): container finished" podID="67c7f22c-65c3-47d0-b463-85771ffbd349" containerID="9cf9e3ea2be1675093f9a374c945669aeb73446518d545b84d48881c4ced0db8" exitCode=0 Feb 04 12:10:33 crc kubenswrapper[4728]: I0204 12:10:33.436662 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvrqr" event={"ID":"67c7f22c-65c3-47d0-b463-85771ffbd349","Type":"ContainerDied","Data":"9cf9e3ea2be1675093f9a374c945669aeb73446518d545b84d48881c4ced0db8"} Feb 04 12:10:33 crc kubenswrapper[4728]: I0204 12:10:33.436969 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvrqr" event={"ID":"67c7f22c-65c3-47d0-b463-85771ffbd349","Type":"ContainerStarted","Data":"6e45f950e3df631d51f27add21b7aaf59aa6b4d586c3357b9ee2da0dd1806075"} Feb 04 12:10:34 crc kubenswrapper[4728]: I0204 12:10:34.446822 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvrqr" event={"ID":"67c7f22c-65c3-47d0-b463-85771ffbd349","Type":"ContainerStarted","Data":"bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95"} Feb 04 12:10:35 crc kubenswrapper[4728]: I0204 12:10:35.463763 4728 generic.go:334] "Generic (PLEG): container finished" podID="67c7f22c-65c3-47d0-b463-85771ffbd349" containerID="bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95" exitCode=0 Feb 04 12:10:35 crc kubenswrapper[4728]: I0204 12:10:35.463808 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvrqr" event={"ID":"67c7f22c-65c3-47d0-b463-85771ffbd349","Type":"ContainerDied","Data":"bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95"} Feb 04 12:10:36 crc kubenswrapper[4728]: I0204 12:10:36.475061 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvrqr" event={"ID":"67c7f22c-65c3-47d0-b463-85771ffbd349","Type":"ContainerStarted","Data":"f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6"} Feb 04 12:10:36 crc kubenswrapper[4728]: I0204 12:10:36.499586 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qvrqr" podStartSLOduration=2.881886196 podStartE2EDuration="5.499558202s" podCreationTimestamp="2026-02-04 12:10:31 +0000 UTC" firstStartedPulling="2026-02-04 12:10:33.43917196 +0000 UTC m=+2582.581876345" lastFinishedPulling="2026-02-04 12:10:36.056843926 +0000 UTC m=+2585.199548351" observedRunningTime="2026-02-04 12:10:36.493612617 +0000 UTC m=+2585.636317002" watchObservedRunningTime="2026-02-04 12:10:36.499558202 +0000 UTC m=+2585.642262597" Feb 04 12:10:42 crc kubenswrapper[4728]: I0204 12:10:42.190879 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 
12:10:42 crc kubenswrapper[4728]: I0204 12:10:42.191601 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:42 crc kubenswrapper[4728]: I0204 12:10:42.239330 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:42 crc kubenswrapper[4728]: I0204 12:10:42.582704 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:43 crc kubenswrapper[4728]: I0204 12:10:43.761373 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvrqr"] Feb 04 12:10:44 crc kubenswrapper[4728]: I0204 12:10:44.542927 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qvrqr" podUID="67c7f22c-65c3-47d0-b463-85771ffbd349" containerName="registry-server" containerID="cri-o://f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6" gracePeriod=2 Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.051358 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.124078 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9jjf\" (UniqueName: \"kubernetes.io/projected/67c7f22c-65c3-47d0-b463-85771ffbd349-kube-api-access-x9jjf\") pod \"67c7f22c-65c3-47d0-b463-85771ffbd349\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.124154 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-utilities\") pod \"67c7f22c-65c3-47d0-b463-85771ffbd349\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.124229 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-catalog-content\") pod \"67c7f22c-65c3-47d0-b463-85771ffbd349\" (UID: \"67c7f22c-65c3-47d0-b463-85771ffbd349\") " Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.126046 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-utilities" (OuterVolumeSpecName: "utilities") pod "67c7f22c-65c3-47d0-b463-85771ffbd349" (UID: "67c7f22c-65c3-47d0-b463-85771ffbd349"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.130576 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c7f22c-65c3-47d0-b463-85771ffbd349-kube-api-access-x9jjf" (OuterVolumeSpecName: "kube-api-access-x9jjf") pod "67c7f22c-65c3-47d0-b463-85771ffbd349" (UID: "67c7f22c-65c3-47d0-b463-85771ffbd349"). InnerVolumeSpecName "kube-api-access-x9jjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.227060 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9jjf\" (UniqueName: \"kubernetes.io/projected/67c7f22c-65c3-47d0-b463-85771ffbd349-kube-api-access-x9jjf\") on node \"crc\" DevicePath \"\"" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.227087 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.284907 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67c7f22c-65c3-47d0-b463-85771ffbd349" (UID: "67c7f22c-65c3-47d0-b463-85771ffbd349"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.328789 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c7f22c-65c3-47d0-b463-85771ffbd349-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.557299 4728 generic.go:334] "Generic (PLEG): container finished" podID="67c7f22c-65c3-47d0-b463-85771ffbd349" containerID="f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6" exitCode=0 Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.557406 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvrqr" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.571228 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvrqr" event={"ID":"67c7f22c-65c3-47d0-b463-85771ffbd349","Type":"ContainerDied","Data":"f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6"} Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.571320 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvrqr" event={"ID":"67c7f22c-65c3-47d0-b463-85771ffbd349","Type":"ContainerDied","Data":"6e45f950e3df631d51f27add21b7aaf59aa6b4d586c3357b9ee2da0dd1806075"} Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.571360 4728 scope.go:117] "RemoveContainer" containerID="f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.595919 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvrqr"] Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.602592 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qvrqr"] Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.608376 4728 scope.go:117] "RemoveContainer" containerID="bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.634224 4728 scope.go:117] "RemoveContainer" containerID="9cf9e3ea2be1675093f9a374c945669aeb73446518d545b84d48881c4ced0db8" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.674212 4728 scope.go:117] "RemoveContainer" containerID="f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6" Feb 04 12:10:45 crc kubenswrapper[4728]: E0204 12:10:45.674996 4728 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6\": container with ID starting with f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6 not found: ID does not exist" containerID="f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.675064 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6"} err="failed to get container status \"f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6\": rpc error: code = NotFound desc = could not find container \"f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6\": container with ID starting with f22b419f864740cb0f900082017dfd7e6fb23ae9766b472b8e408b33b79f15a6 not found: ID does not exist" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.675106 4728 scope.go:117] "RemoveContainer" containerID="bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95" Feb 04 12:10:45 crc kubenswrapper[4728]: E0204 12:10:45.675493 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95\": container with ID starting with bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95 not found: ID does not exist" containerID="bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.675533 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95"} err="failed to get container status \"bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95\": rpc error: code = NotFound desc = could not find container \"bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95\": container with ID starting with bb07fb43ade3578702af7d2ea58ddf05566129fedc1bd4198d53013f479a4f95 not found: ID does not exist" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.675567 4728 scope.go:117] "RemoveContainer" containerID="9cf9e3ea2be1675093f9a374c945669aeb73446518d545b84d48881c4ced0db8" Feb 04 12:10:45 crc kubenswrapper[4728]: E0204 12:10:45.676120 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf9e3ea2be1675093f9a374c945669aeb73446518d545b84d48881c4ced0db8\": container with ID starting with 9cf9e3ea2be1675093f9a374c945669aeb73446518d545b84d48881c4ced0db8 not found: ID does not exist" containerID="9cf9e3ea2be1675093f9a374c945669aeb73446518d545b84d48881c4ced0db8" Feb 04 12:10:45 crc kubenswrapper[4728]: I0204 12:10:45.676180 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf9e3ea2be1675093f9a374c945669aeb73446518d545b84d48881c4ced0db8"} err="failed to get container status \"9cf9e3ea2be1675093f9a374c945669aeb73446518d545b84d48881c4ced0db8\": rpc error: code = NotFound desc = could not find container \"9cf9e3ea2be1675093f9a374c945669aeb73446518d545b84d48881c4ced0db8\": container with ID starting with 9cf9e3ea2be1675093f9a374c945669aeb73446518d545b84d48881c4ced0db8 not found: ID does not exist" Feb 04 12:10:47 crc kubenswrapper[4728]: I0204 12:10:47.567657 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="67c7f22c-65c3-47d0-b463-85771ffbd349" path="/var/lib/kubelet/pods/67c7f22c-65c3-47d0-b463-85771ffbd349/volumes" Feb 04 12:11:35 crc kubenswrapper[4728]: I0204 12:11:35.448787 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:11:35 crc kubenswrapper[4728]: I0204 12:11:35.449688 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:11:54 crc kubenswrapper[4728]: I0204 12:11:54.192586 4728 generic.go:334] "Generic (PLEG): container finished" podID="4bf728e8-5290-463c-bd6a-194f0ddb3c5d" containerID="cb61d9998d4889510888fbcd93c80a6aaaf40edcacf77c3953517568be4349a4" exitCode=0 Feb 04 12:11:54 crc kubenswrapper[4728]: I0204 12:11:54.192673 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" event={"ID":"4bf728e8-5290-463c-bd6a-194f0ddb3c5d","Type":"ContainerDied","Data":"cb61d9998d4889510888fbcd93c80a6aaaf40edcacf77c3953517568be4349a4"} Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.609033 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.698283 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-2\") pod \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.698426 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-telemetry-combined-ca-bundle\") pod \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.698473 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-inventory\") pod \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.698643 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-1\") pod \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.698706 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ssh-key-openstack-edpm-ipam\") pod \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") " 
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.698797 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w9pk\" (UniqueName: \"kubernetes.io/projected/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-kube-api-access-7w9pk\") pod \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") "
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.698858 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-0\") pod \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\" (UID: \"4bf728e8-5290-463c-bd6a-194f0ddb3c5d\") "
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.704859 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4bf728e8-5290-463c-bd6a-194f0ddb3c5d" (UID: "4bf728e8-5290-463c-bd6a-194f0ddb3c5d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.706445 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-kube-api-access-7w9pk" (OuterVolumeSpecName: "kube-api-access-7w9pk") pod "4bf728e8-5290-463c-bd6a-194f0ddb3c5d" (UID: "4bf728e8-5290-463c-bd6a-194f0ddb3c5d"). InnerVolumeSpecName "kube-api-access-7w9pk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.729118 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4bf728e8-5290-463c-bd6a-194f0ddb3c5d" (UID: "4bf728e8-5290-463c-bd6a-194f0ddb3c5d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.732843 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-inventory" (OuterVolumeSpecName: "inventory") pod "4bf728e8-5290-463c-bd6a-194f0ddb3c5d" (UID: "4bf728e8-5290-463c-bd6a-194f0ddb3c5d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.734239 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "4bf728e8-5290-463c-bd6a-194f0ddb3c5d" (UID: "4bf728e8-5290-463c-bd6a-194f0ddb3c5d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.736050 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "4bf728e8-5290-463c-bd6a-194f0ddb3c5d" (UID: "4bf728e8-5290-463c-bd6a-194f0ddb3c5d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.737994 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "4bf728e8-5290-463c-bd6a-194f0ddb3c5d" (UID: "4bf728e8-5290-463c-bd6a-194f0ddb3c5d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.803636 4728 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.803677 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.803688 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w9pk\" (UniqueName: \"kubernetes.io/projected/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-kube-api-access-7w9pk\") on node \"crc\" DevicePath \"\""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.803698 4728 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.803708 4728 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.803719 4728 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 12:11:55 crc kubenswrapper[4728]: I0204 12:11:55.803730 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bf728e8-5290-463c-bd6a-194f0ddb3c5d-inventory\") on node \"crc\" DevicePath \"\""
Feb 04 12:11:56 crc kubenswrapper[4728]: I0204 12:11:56.210041 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz" event={"ID":"4bf728e8-5290-463c-bd6a-194f0ddb3c5d","Type":"ContainerDied","Data":"37d15160f0cd51978f17402e036097e7b643c869c9871e6b4beba8b5d1f4bb0f"}
Feb 04 12:11:56 crc kubenswrapper[4728]: I0204 12:11:56.210089 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz"
Feb 04 12:11:56 crc kubenswrapper[4728]: I0204 12:11:56.210120 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37d15160f0cd51978f17402e036097e7b643c869c9871e6b4beba8b5d1f4bb0f"
Feb 04 12:12:05 crc kubenswrapper[4728]: I0204 12:12:05.448985 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 12:12:05 crc kubenswrapper[4728]: I0204 12:12:05.449641 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 12:12:35 crc kubenswrapper[4728]: I0204 12:12:35.447921 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 12:12:35 crc kubenswrapper[4728]: I0204 12:12:35.448410 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 12:12:35 crc kubenswrapper[4728]: I0204 12:12:35.448453 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 12:12:35 crc kubenswrapper[4728]: I0204 12:12:35.449144 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"050fd42d478ece55fcb94795bb9bece6d44a8e884401343e87b6b7c0856343b1"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 12:12:35 crc kubenswrapper[4728]: I0204 12:12:35.449197 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://050fd42d478ece55fcb94795bb9bece6d44a8e884401343e87b6b7c0856343b1" gracePeriod=600
Feb 04 12:12:35 crc kubenswrapper[4728]: I0204 12:12:35.591353 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="050fd42d478ece55fcb94795bb9bece6d44a8e884401343e87b6b7c0856343b1" exitCode=0
Feb 04 12:12:35 crc kubenswrapper[4728]: I0204 12:12:35.591417 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"050fd42d478ece55fcb94795bb9bece6d44a8e884401343e87b6b7c0856343b1"}
Feb 04 12:12:35 crc kubenswrapper[4728]: I0204 12:12:35.591469 4728 scope.go:117] "RemoveContainer" containerID="cde42b625454b44b9571ad7ad54cf5b87f062f19cf9315ad9b559ef6336ee860"
Feb 04 12:12:36 crc kubenswrapper[4728]: I0204 12:12:36.602023 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"}
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.784501 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9r2d2"]
Feb 04 12:13:13 crc kubenswrapper[4728]: E0204 12:13:13.785222 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c7f22c-65c3-47d0-b463-85771ffbd349" containerName="extract-content"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.785233 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c7f22c-65c3-47d0-b463-85771ffbd349" containerName="extract-content"
Feb 04 12:13:13 crc kubenswrapper[4728]: E0204 12:13:13.785262 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c7f22c-65c3-47d0-b463-85771ffbd349" containerName="registry-server"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.785269 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c7f22c-65c3-47d0-b463-85771ffbd349" containerName="registry-server"
Feb 04 12:13:13 crc kubenswrapper[4728]: E0204 12:13:13.785276 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf728e8-5290-463c-bd6a-194f0ddb3c5d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.785283 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf728e8-5290-463c-bd6a-194f0ddb3c5d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 04 12:13:13 crc kubenswrapper[4728]: E0204 12:13:13.785298 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c7f22c-65c3-47d0-b463-85771ffbd349" containerName="extract-utilities"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.785304 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c7f22c-65c3-47d0-b463-85771ffbd349" containerName="extract-utilities"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.785467 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c7f22c-65c3-47d0-b463-85771ffbd349" containerName="registry-server"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.785488 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf728e8-5290-463c-bd6a-194f0ddb3c5d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.788086 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.789383 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-catalog-content\") pod \"redhat-marketplace-9r2d2\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") " pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.789462 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79mv7\" (UniqueName: \"kubernetes.io/projected/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-kube-api-access-79mv7\") pod \"redhat-marketplace-9r2d2\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") " pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.789612 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-utilities\") pod \"redhat-marketplace-9r2d2\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") " pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.805497 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9r2d2"]
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.890492 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79mv7\" (UniqueName: \"kubernetes.io/projected/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-kube-api-access-79mv7\") pod \"redhat-marketplace-9r2d2\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") " pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.890566 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-utilities\") pod \"redhat-marketplace-9r2d2\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") " pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.890687 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-catalog-content\") pod \"redhat-marketplace-9r2d2\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") " pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.891414 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-catalog-content\") pod \"redhat-marketplace-9r2d2\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") " pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.891448 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-utilities\") pod \"redhat-marketplace-9r2d2\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") " pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:13 crc kubenswrapper[4728]: I0204 12:13:13.916720 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79mv7\" (UniqueName: \"kubernetes.io/projected/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-kube-api-access-79mv7\") pod \"redhat-marketplace-9r2d2\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") " pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:14 crc kubenswrapper[4728]: I0204 12:13:14.108212 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:14 crc kubenswrapper[4728]: I0204 12:13:14.618035 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9r2d2"]
Feb 04 12:13:14 crc kubenswrapper[4728]: I0204 12:13:14.978878 4728 generic.go:334] "Generic (PLEG): container finished" podID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" containerID="57982a4dbcf3c962b5b7a7f9ee92268032732b4340f08888cfd8f9ed8a3acfa3" exitCode=0
Feb 04 12:13:14 crc kubenswrapper[4728]: I0204 12:13:14.978986 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9r2d2" event={"ID":"6f5c92f7-cc99-4acd-8ba4-7949b2b96641","Type":"ContainerDied","Data":"57982a4dbcf3c962b5b7a7f9ee92268032732b4340f08888cfd8f9ed8a3acfa3"}
Feb 04 12:13:14 crc kubenswrapper[4728]: I0204 12:13:14.979233 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9r2d2" event={"ID":"6f5c92f7-cc99-4acd-8ba4-7949b2b96641","Type":"ContainerStarted","Data":"8f2f4385b157188c4d9bbfc86897bd3ed4db8445a7973c55774922e672ebf1e7"}
Feb 04 12:13:14 crc kubenswrapper[4728]: I0204 12:13:14.980788 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 04 12:13:16 crc kubenswrapper[4728]: E0204 12:13:16.349327 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f5c92f7_cc99_4acd_8ba4_7949b2b96641.slice/crio-02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f5c92f7_cc99_4acd_8ba4_7949b2b96641.slice/crio-conmon-02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab.scope\": RecentStats: unable to find data in memory cache]"
Feb 04 12:13:16 crc kubenswrapper[4728]: I0204 12:13:16.998003 4728 generic.go:334] "Generic (PLEG): container finished" podID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" containerID="02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab" exitCode=0
Feb 04 12:13:16 crc kubenswrapper[4728]: I0204 12:13:16.998053 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9r2d2" event={"ID":"6f5c92f7-cc99-4acd-8ba4-7949b2b96641","Type":"ContainerDied","Data":"02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab"}
Feb 04 12:13:18 crc kubenswrapper[4728]: I0204 12:13:18.009390 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9r2d2" event={"ID":"6f5c92f7-cc99-4acd-8ba4-7949b2b96641","Type":"ContainerStarted","Data":"f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a"}
Feb 04 12:13:18 crc kubenswrapper[4728]: I0204 12:13:18.034378 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9r2d2" podStartSLOduration=2.6022547940000003 podStartE2EDuration="5.034359128s" podCreationTimestamp="2026-02-04 12:13:13 +0000 UTC" firstStartedPulling="2026-02-04 12:13:14.980535157 +0000 UTC m=+2744.123239542" lastFinishedPulling="2026-02-04 12:13:17.412639461 +0000 UTC m=+2746.555343876" observedRunningTime="2026-02-04 12:13:18.028998497 +0000 UTC m=+2747.171702893" watchObservedRunningTime="2026-02-04 12:13:18.034359128 +0000 UTC m=+2747.177063513"
Feb 04 12:13:24 crc kubenswrapper[4728]: I0204 12:13:24.109063 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:24 crc kubenswrapper[4728]: I0204 12:13:24.109507 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:24 crc kubenswrapper[4728]: I0204 12:13:24.228459 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:25 crc kubenswrapper[4728]: I0204 12:13:25.139247 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:25 crc kubenswrapper[4728]: I0204 12:13:25.182022 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9r2d2"]
Feb 04 12:13:27 crc kubenswrapper[4728]: I0204 12:13:27.105998 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9r2d2" podUID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" containerName="registry-server" containerID="cri-o://f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a" gracePeriod=2
Feb 04 12:13:27 crc kubenswrapper[4728]: I0204 12:13:27.567694 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:27 crc kubenswrapper[4728]: I0204 12:13:27.744408 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-utilities\") pod \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") "
Feb 04 12:13:27 crc kubenswrapper[4728]: I0204 12:13:27.744472 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-catalog-content\") pod \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") "
Feb 04 12:13:27 crc kubenswrapper[4728]: I0204 12:13:27.744516 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79mv7\" (UniqueName: \"kubernetes.io/projected/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-kube-api-access-79mv7\") pod \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\" (UID: \"6f5c92f7-cc99-4acd-8ba4-7949b2b96641\") "
Feb 04 12:13:27 crc kubenswrapper[4728]: I0204 12:13:27.745488 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-utilities" (OuterVolumeSpecName: "utilities") pod "6f5c92f7-cc99-4acd-8ba4-7949b2b96641" (UID: "6f5c92f7-cc99-4acd-8ba4-7949b2b96641"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:13:27 crc kubenswrapper[4728]: I0204 12:13:27.750322 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-kube-api-access-79mv7" (OuterVolumeSpecName: "kube-api-access-79mv7") pod "6f5c92f7-cc99-4acd-8ba4-7949b2b96641" (UID: "6f5c92f7-cc99-4acd-8ba4-7949b2b96641"). InnerVolumeSpecName "kube-api-access-79mv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:13:27 crc kubenswrapper[4728]: I0204 12:13:27.766774 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f5c92f7-cc99-4acd-8ba4-7949b2b96641" (UID: "6f5c92f7-cc99-4acd-8ba4-7949b2b96641"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:13:27 crc kubenswrapper[4728]: I0204 12:13:27.847295 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 12:13:27 crc kubenswrapper[4728]: I0204 12:13:27.847333 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 12:13:27 crc kubenswrapper[4728]: I0204 12:13:27.847351 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79mv7\" (UniqueName: \"kubernetes.io/projected/6f5c92f7-cc99-4acd-8ba4-7949b2b96641-kube-api-access-79mv7\") on node \"crc\" DevicePath \"\""
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.114869 4728 generic.go:334] "Generic (PLEG): container finished" podID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" containerID="f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a" exitCode=0
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.114956 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9r2d2" event={"ID":"6f5c92f7-cc99-4acd-8ba4-7949b2b96641","Type":"ContainerDied","Data":"f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a"}
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.114967 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9r2d2"
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.114999 4728 scope.go:117] "RemoveContainer" containerID="f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a"
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.114987 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9r2d2" event={"ID":"6f5c92f7-cc99-4acd-8ba4-7949b2b96641","Type":"ContainerDied","Data":"8f2f4385b157188c4d9bbfc86897bd3ed4db8445a7973c55774922e672ebf1e7"}
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.145479 4728 scope.go:117] "RemoveContainer" containerID="02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab"
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.149592 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9r2d2"]
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.165869 4728 scope.go:117] "RemoveContainer" containerID="57982a4dbcf3c962b5b7a7f9ee92268032732b4340f08888cfd8f9ed8a3acfa3"
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.165981 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9r2d2"]
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.220527 4728 scope.go:117] "RemoveContainer" containerID="f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a"
Feb 04 12:13:28 crc kubenswrapper[4728]: E0204 12:13:28.220982 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a\": container with ID starting with f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a not found: ID does not exist" containerID="f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a"
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.221014 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a"} err="failed to get container status \"f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a\": rpc error: code = NotFound desc = could not find container \"f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a\": container with ID starting with f89d9f04ad8f674459a69ce217033c1ffbfae331f5b25bf6e5302e1364d9e89a not found: ID does not exist"
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.221035 4728 scope.go:117] "RemoveContainer" containerID="02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab"
Feb 04 12:13:28 crc kubenswrapper[4728]: E0204 12:13:28.221526 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab\": container with ID starting with 02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab not found: ID does not exist" containerID="02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab"
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.221569 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab"} err="failed to get container status \"02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab\": rpc error: code = NotFound desc = could not find container \"02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab\": container with ID starting with 02a29332bc2d58991fcec4da1d2e84ae2e8156ce679ff9a7d4804a7671b6d9ab not found: ID does not exist"
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.221596 4728 scope.go:117] "RemoveContainer" containerID="57982a4dbcf3c962b5b7a7f9ee92268032732b4340f08888cfd8f9ed8a3acfa3"
Feb 04 12:13:28 crc kubenswrapper[4728]: E0204 12:13:28.221868 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57982a4dbcf3c962b5b7a7f9ee92268032732b4340f08888cfd8f9ed8a3acfa3\": container with ID starting with 57982a4dbcf3c962b5b7a7f9ee92268032732b4340f08888cfd8f9ed8a3acfa3 not found: ID does not exist" containerID="57982a4dbcf3c962b5b7a7f9ee92268032732b4340f08888cfd8f9ed8a3acfa3"
Feb 04 12:13:28 crc kubenswrapper[4728]: I0204 12:13:28.221905 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57982a4dbcf3c962b5b7a7f9ee92268032732b4340f08888cfd8f9ed8a3acfa3"} err="failed to get container status \"57982a4dbcf3c962b5b7a7f9ee92268032732b4340f08888cfd8f9ed8a3acfa3\": rpc error: code = NotFound desc = could not find container \"57982a4dbcf3c962b5b7a7f9ee92268032732b4340f08888cfd8f9ed8a3acfa3\": container with ID starting with 57982a4dbcf3c962b5b7a7f9ee92268032732b4340f08888cfd8f9ed8a3acfa3 not found: ID does not exist"
Feb 04 12:13:29 crc kubenswrapper[4728]: I0204 12:13:29.572701 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" path="/var/lib/kubelet/pods/6f5c92f7-cc99-4acd-8ba4-7949b2b96641/volumes"
Feb 04 12:14:08 crc kubenswrapper[4728]: I0204 12:14:08.859565 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q7q9w"]
Feb 04 12:14:08 crc kubenswrapper[4728]: E0204 12:14:08.861731 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" containerName="extract-utilities"
Feb 04 12:14:08 crc kubenswrapper[4728]: I0204 12:14:08.861895 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" containerName="extract-utilities"
Feb 04 12:14:08 crc kubenswrapper[4728]: E0204 12:14:08.862010 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" containerName="extract-content"
Feb 04 12:14:08 crc kubenswrapper[4728]: I0204 12:14:08.862100 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" containerName="extract-content"
Feb 04 12:14:08 crc kubenswrapper[4728]: E0204 12:14:08.862183 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" containerName="registry-server"
Feb 04 12:14:08 crc kubenswrapper[4728]: I0204 12:14:08.862255 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" containerName="registry-server"
Feb 04 12:14:08 crc kubenswrapper[4728]: I0204 12:14:08.862548 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5c92f7-cc99-4acd-8ba4-7949b2b96641" containerName="registry-server"
Feb 04 12:14:08 crc kubenswrapper[4728]: I0204 12:14:08.864980 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7q9w"
Feb 04 12:14:08 crc kubenswrapper[4728]: I0204 12:14:08.871867 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7q9w"]
Feb 04 12:14:08 crc kubenswrapper[4728]: I0204 12:14:08.947622 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-catalog-content\") pod \"community-operators-q7q9w\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " pod="openshift-marketplace/community-operators-q7q9w"
Feb 04 12:14:08 crc kubenswrapper[4728]: I0204 12:14:08.947683 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kprt\" (UniqueName: \"kubernetes.io/projected/394ce9e4-183e-41a6-a1e6-ae02faec8839-kube-api-access-4kprt\") pod \"community-operators-q7q9w\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " pod="openshift-marketplace/community-operators-q7q9w"
Feb 04 12:14:08 crc kubenswrapper[4728]: I0204 12:14:08.948008 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-utilities\") pod \"community-operators-q7q9w\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " pod="openshift-marketplace/community-operators-q7q9w"
Feb 04 12:14:09 crc kubenswrapper[4728]: I0204 12:14:09.049732 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-utilities\") pod \"community-operators-q7q9w\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " pod="openshift-marketplace/community-operators-q7q9w"
Feb 04 12:14:09 crc kubenswrapper[4728]: I0204 12:14:09.049915 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-catalog-content\") pod \"community-operators-q7q9w\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " pod="openshift-marketplace/community-operators-q7q9w"
Feb 04 12:14:09 crc kubenswrapper[4728]: I0204 12:14:09.049953 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kprt\" (UniqueName: \"kubernetes.io/projected/394ce9e4-183e-41a6-a1e6-ae02faec8839-kube-api-access-4kprt\") pod \"community-operators-q7q9w\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " pod="openshift-marketplace/community-operators-q7q9w"
Feb 04 12:14:09 crc kubenswrapper[4728]: I0204 12:14:09.050505 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-catalog-content\") pod \"community-operators-q7q9w\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " pod="openshift-marketplace/community-operators-q7q9w"
Feb 04 12:14:09 crc kubenswrapper[4728]: I0204 12:14:09.050519 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-utilities\") pod \"community-operators-q7q9w\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " pod="openshift-marketplace/community-operators-q7q9w"
Feb 04 12:14:09 crc kubenswrapper[4728]: I0204 12:14:09.087164 4728 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-4kprt\" (UniqueName: \"kubernetes.io/projected/394ce9e4-183e-41a6-a1e6-ae02faec8839-kube-api-access-4kprt\") pod \"community-operators-q7q9w\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " pod="openshift-marketplace/community-operators-q7q9w" Feb 04 12:14:09 crc kubenswrapper[4728]: I0204 12:14:09.201063 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7q9w" Feb 04 12:14:09 crc kubenswrapper[4728]: I0204 12:14:09.728250 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7q9w"] Feb 04 12:14:10 crc kubenswrapper[4728]: I0204 12:14:10.500663 4728 generic.go:334] "Generic (PLEG): container finished" podID="394ce9e4-183e-41a6-a1e6-ae02faec8839" containerID="857893b64ad3e8f593316fe43a82e8307d0218d9c1958732a4119389f021d37a" exitCode=0 Feb 04 12:14:10 crc kubenswrapper[4728]: I0204 12:14:10.500780 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7q9w" event={"ID":"394ce9e4-183e-41a6-a1e6-ae02faec8839","Type":"ContainerDied","Data":"857893b64ad3e8f593316fe43a82e8307d0218d9c1958732a4119389f021d37a"} Feb 04 12:14:10 crc kubenswrapper[4728]: I0204 12:14:10.500976 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7q9w" event={"ID":"394ce9e4-183e-41a6-a1e6-ae02faec8839","Type":"ContainerStarted","Data":"43414b252578ddf6e5f6217b97430935be15bbab12db9511ba8a2a35218f84fe"} Feb 04 12:14:11 crc kubenswrapper[4728]: I0204 12:14:11.511244 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7q9w" event={"ID":"394ce9e4-183e-41a6-a1e6-ae02faec8839","Type":"ContainerStarted","Data":"5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075"} Feb 04 12:14:12 crc kubenswrapper[4728]: I0204 12:14:12.524720 4728 generic.go:334] "Generic (PLEG): container finished" podID="394ce9e4-183e-41a6-a1e6-ae02faec8839" containerID="5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075" exitCode=0 Feb 04 12:14:12 crc kubenswrapper[4728]: I0204 12:14:12.524787 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7q9w" event={"ID":"394ce9e4-183e-41a6-a1e6-ae02faec8839","Type":"ContainerDied","Data":"5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075"} Feb 04 12:14:13 crc kubenswrapper[4728]: I0204 12:14:13.535452 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7q9w" event={"ID":"394ce9e4-183e-41a6-a1e6-ae02faec8839","Type":"ContainerStarted","Data":"e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a"} Feb 04 12:14:13 crc kubenswrapper[4728]: I0204 12:14:13.555180 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q7q9w" podStartSLOduration=2.776110953 podStartE2EDuration="5.555163998s" podCreationTimestamp="2026-02-04 12:14:08 +0000 UTC" firstStartedPulling="2026-02-04 12:14:10.504341379 +0000 UTC m=+2799.647045764" lastFinishedPulling="2026-02-04 12:14:13.283394424 +0000 UTC m=+2802.426098809" observedRunningTime="2026-02-04 12:14:13.552474803 +0000 UTC m=+2802.695179178" watchObservedRunningTime="2026-02-04 12:14:13.555163998 +0000 UTC m=+2802.697868383" Feb 04 12:14:19 crc kubenswrapper[4728]: I0204 12:14:19.201966 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-q7q9w" Feb 04 12:14:19 crc kubenswrapper[4728]: I0204 12:14:19.202388 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q7q9w" Feb 04 12:14:19 crc kubenswrapper[4728]: I0204 12:14:19.272595 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q7q9w" Feb 04 12:14:19 crc kubenswrapper[4728]: I0204 12:14:19.662428 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q7q9w" Feb 04 12:14:19 crc kubenswrapper[4728]: I0204 12:14:19.713071 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7q9w"] Feb 04 12:14:21 crc kubenswrapper[4728]: I0204 12:14:21.631046 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q7q9w" podUID="394ce9e4-183e-41a6-a1e6-ae02faec8839" containerName="registry-server" containerID="cri-o://e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a" gracePeriod=2 Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.222471 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7q9w" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.317490 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-catalog-content\") pod \"394ce9e4-183e-41a6-a1e6-ae02faec8839\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.317629 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-utilities\") pod \"394ce9e4-183e-41a6-a1e6-ae02faec8839\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.317723 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kprt\" (UniqueName: \"kubernetes.io/projected/394ce9e4-183e-41a6-a1e6-ae02faec8839-kube-api-access-4kprt\") pod \"394ce9e4-183e-41a6-a1e6-ae02faec8839\" (UID: \"394ce9e4-183e-41a6-a1e6-ae02faec8839\") " Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.318644 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-utilities" (OuterVolumeSpecName: "utilities") pod "394ce9e4-183e-41a6-a1e6-ae02faec8839" (UID: "394ce9e4-183e-41a6-a1e6-ae02faec8839"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.323365 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/394ce9e4-183e-41a6-a1e6-ae02faec8839-kube-api-access-4kprt" (OuterVolumeSpecName: "kube-api-access-4kprt") pod "394ce9e4-183e-41a6-a1e6-ae02faec8839" (UID: "394ce9e4-183e-41a6-a1e6-ae02faec8839"). InnerVolumeSpecName "kube-api-access-4kprt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.420119 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kprt\" (UniqueName: \"kubernetes.io/projected/394ce9e4-183e-41a6-a1e6-ae02faec8839-kube-api-access-4kprt\") on node \"crc\" DevicePath \"\"" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.420154 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.642478 4728 generic.go:334] "Generic (PLEG): container finished" podID="394ce9e4-183e-41a6-a1e6-ae02faec8839" containerID="e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a" exitCode=0 Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.642559 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7q9w" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.642580 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7q9w" event={"ID":"394ce9e4-183e-41a6-a1e6-ae02faec8839","Type":"ContainerDied","Data":"e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a"} Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.642915 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7q9w" event={"ID":"394ce9e4-183e-41a6-a1e6-ae02faec8839","Type":"ContainerDied","Data":"43414b252578ddf6e5f6217b97430935be15bbab12db9511ba8a2a35218f84fe"} Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.642964 4728 scope.go:117] "RemoveContainer" containerID="e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.667847 4728 scope.go:117] "RemoveContainer" containerID="5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.697690 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "394ce9e4-183e-41a6-a1e6-ae02faec8839" (UID: "394ce9e4-183e-41a6-a1e6-ae02faec8839"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.705352 4728 scope.go:117] "RemoveContainer" containerID="857893b64ad3e8f593316fe43a82e8307d0218d9c1958732a4119389f021d37a" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.725683 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/394ce9e4-183e-41a6-a1e6-ae02faec8839-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.743173 4728 scope.go:117] "RemoveContainer" containerID="e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a" Feb 04 12:14:22 crc kubenswrapper[4728]: E0204 12:14:22.743688 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a\": container with ID starting with e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a not found: ID does not exist" containerID="e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.743729 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a"} err="failed to get container status \"e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a\": rpc error: code = NotFound desc = could not find container \"e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a\": container with ID starting with e1d0e3cee4450cc840d2c652d85717c5da9fc32d9a74425fc83c82f71da0df1a not found: ID does not exist" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.743770 4728 scope.go:117] "RemoveContainer" containerID="5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075" Feb 04 12:14:22 crc kubenswrapper[4728]: E0204 12:14:22.744165 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075\": container with ID starting with 5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075 not found: ID does not exist" containerID="5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.744213 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075"} err="failed to get container status \"5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075\": rpc error: code = NotFound desc = could not find container \"5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075\": container with ID starting with 5ad18f4f5f483af3daa2804b989c0627cc68dc8255b1d518baa6a15548734075 not found: ID does not exist" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.744250 4728 scope.go:117] "RemoveContainer" containerID="857893b64ad3e8f593316fe43a82e8307d0218d9c1958732a4119389f021d37a" Feb 04 12:14:22 crc kubenswrapper[4728]: E0204 12:14:22.744599 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"857893b64ad3e8f593316fe43a82e8307d0218d9c1958732a4119389f021d37a\": container with ID starting with 857893b64ad3e8f593316fe43a82e8307d0218d9c1958732a4119389f021d37a not found: ID does not exist" 
containerID="857893b64ad3e8f593316fe43a82e8307d0218d9c1958732a4119389f021d37a" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.744624 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"857893b64ad3e8f593316fe43a82e8307d0218d9c1958732a4119389f021d37a"} err="failed to get container status \"857893b64ad3e8f593316fe43a82e8307d0218d9c1958732a4119389f021d37a\": rpc error: code = NotFound desc = could not find container \"857893b64ad3e8f593316fe43a82e8307d0218d9c1958732a4119389f021d37a\": container with ID starting with 857893b64ad3e8f593316fe43a82e8307d0218d9c1958732a4119389f021d37a not found: ID does not exist" Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.975932 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7q9w"] Feb 04 12:14:22 crc kubenswrapper[4728]: I0204 12:14:22.984269 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q7q9w"] Feb 04 12:14:23 crc kubenswrapper[4728]: I0204 12:14:23.565136 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="394ce9e4-183e-41a6-a1e6-ae02faec8839" path="/var/lib/kubelet/pods/394ce9e4-183e-41a6-a1e6-ae02faec8839/volumes" Feb 04 12:14:35 crc kubenswrapper[4728]: I0204 12:14:35.449036 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:14:35 crc kubenswrapper[4728]: I0204 12:14:35.449431 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:14:49 crc kubenswrapper[4728]: I0204 12:14:49.003350 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6dcb54f59-lnlx2_3b49d7d8-7c63-482c-b882-25c01e798afe/manager/0.log" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.818305 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.818806 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="84f425e3-ba15-437d-addf-aa2081f736b5" containerName="openstackclient" containerID="cri-o://3326601329e1332a3e753bd36ab49dd44612e85eea1e63841ba6770f31db3a67" gracePeriod=2 Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.827780 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.855431 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 04 12:14:50 crc kubenswrapper[4728]: E0204 12:14:50.855838 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f425e3-ba15-437d-addf-aa2081f736b5" containerName="openstackclient" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.855855 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f425e3-ba15-437d-addf-aa2081f736b5" containerName="openstackclient" Feb 04 12:14:50 crc kubenswrapper[4728]: E0204 12:14:50.855883 4728 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="394ce9e4-183e-41a6-a1e6-ae02faec8839" containerName="extract-utilities" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.855889 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="394ce9e4-183e-41a6-a1e6-ae02faec8839" containerName="extract-utilities" Feb 04 12:14:50 crc kubenswrapper[4728]: E0204 12:14:50.855908 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="394ce9e4-183e-41a6-a1e6-ae02faec8839" containerName="extract-content" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.855914 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="394ce9e4-183e-41a6-a1e6-ae02faec8839" containerName="extract-content" Feb 04 12:14:50 crc kubenswrapper[4728]: E0204 12:14:50.855942 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="394ce9e4-183e-41a6-a1e6-ae02faec8839" containerName="registry-server" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.855949 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="394ce9e4-183e-41a6-a1e6-ae02faec8839" containerName="registry-server" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.856142 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="394ce9e4-183e-41a6-a1e6-ae02faec8839" containerName="registry-server" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.856163 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="84f425e3-ba15-437d-addf-aa2081f736b5" containerName="openstackclient" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.856765 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.863837 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="84f425e3-ba15-437d-addf-aa2081f736b5" podUID="319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.868121 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.904383 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 04 12:14:50 crc kubenswrapper[4728]: E0204 12:14:50.905269 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-c94kf openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.917319 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.926234 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.927726 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.949397 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.958697 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94kf\" (UniqueName: \"kubernetes.io/projected/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-kube-api-access-c94kf\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.961350 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config-secret\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.961573 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:50 crc kubenswrapper[4728]: I0204 12:14:50.961685 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.063575 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.063650 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh68c\" (UniqueName: \"kubernetes.io/projected/2607e08f-8499-4718-95b4-1377b153c155-kube-api-access-lh68c\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.063674 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.063785 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2607e08f-8499-4718-95b4-1377b153c155-openstack-config\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.063958 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94kf\" (UniqueName: 
\"kubernetes.io/projected/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-kube-api-access-c94kf\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.064121 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-openstack-config-secret\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.064254 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config-secret\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.064391 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.064980 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: E0204 12:14:51.066174 4728 projected.go:194] Error preparing data for projected volume kube-api-access-c94kf for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a) does not match the UID in record. The object might have been deleted and then recreated Feb 04 12:14:51 crc kubenswrapper[4728]: E0204 12:14:51.066228 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-kube-api-access-c94kf podName:319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a nodeName:}" failed. No retries permitted until 2026-02-04 12:14:51.566211928 +0000 UTC m=+2840.708916313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c94kf" (UniqueName: "kubernetes.io/projected/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-kube-api-access-c94kf") pod "openstackclient" (UID: "319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a) does not match the UID in record. 
The object might have been deleted and then recreated Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.069323 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.070726 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config-secret\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.167909 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh68c\" (UniqueName: \"kubernetes.io/projected/2607e08f-8499-4718-95b4-1377b153c155-kube-api-access-lh68c\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.167963 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2607e08f-8499-4718-95b4-1377b153c155-openstack-config\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.168058 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-openstack-config-secret\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.168081 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.170685 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2607e08f-8499-4718-95b4-1377b153c155-openstack-config\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.171849 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-openstack-config-secret\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.171897 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.186339 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh68c\" (UniqueName: 
\"kubernetes.io/projected/2607e08f-8499-4718-95b4-1377b153c155-kube-api-access-lh68c\") pod \"openstackclient\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.266768 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.576037 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94kf\" (UniqueName: \"kubernetes.io/projected/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-kube-api-access-c94kf\") pod \"openstackclient\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: E0204 12:14:51.578157 4728 projected.go:194] Error preparing data for projected volume kube-api-access-c94kf for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a) does not match the UID in record. The object might have been deleted and then recreated Feb 04 12:14:51 crc kubenswrapper[4728]: E0204 12:14:51.578251 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-kube-api-access-c94kf podName:319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a nodeName:}" failed. No retries permitted until 2026-02-04 12:14:52.57823104 +0000 UTC m=+2841.720935425 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-c94kf" (UniqueName: "kubernetes.io/projected/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-kube-api-access-c94kf") pod "openstackclient" (UID: "319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a) does not match the UID in record. The object might have been deleted and then recreated Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.801833 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.847183 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-64c95"] Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.848515 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-64c95" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.855191 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-64c95"] Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.900672 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.901386 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2607e08f-8499-4718-95b4-1377b153c155","Type":"ContainerStarted","Data":"77dc1cc6aa74afe52b62194566c7125a8001b2c2801ec8690b953550501c4392"} Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.914695 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.917451 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a" podUID="2607e08f-8499-4718-95b4-1377b153c155" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.943798 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-7a49-account-create-update-rlk6x"] Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.944962 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7a49-account-create-update-rlk6x" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.947043 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.954415 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7a49-account-create-update-rlk6x"] Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.983223 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31b7dab7-5f37-4d0a-9606-965afa056370-operator-scripts\") pod \"aodh-db-create-64c95\" (UID: \"31b7dab7-5f37-4d0a-9606-965afa056370\") " pod="openstack/aodh-db-create-64c95" Feb 04 12:14:51 crc kubenswrapper[4728]: I0204 12:14:51.983273 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkvtq\" (UniqueName: \"kubernetes.io/projected/31b7dab7-5f37-4d0a-9606-965afa056370-kube-api-access-qkvtq\") pod \"aodh-db-create-64c95\" (UID: \"31b7dab7-5f37-4d0a-9606-965afa056370\") " pod="openstack/aodh-db-create-64c95" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.084543 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-combined-ca-bundle\") pod \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.084617 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config-secret\") pod \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.084699 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config\") pod \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\" (UID: \"319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a\") " Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.085138 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp88t\" (UniqueName: \"kubernetes.io/projected/2cb1855f-7a6c-42af-ac76-95d78714517c-kube-api-access-tp88t\") pod \"aodh-7a49-account-create-update-rlk6x\" (UID: \"2cb1855f-7a6c-42af-ac76-95d78714517c\") " pod="openstack/aodh-7a49-account-create-update-rlk6x" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.085195 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/2cb1855f-7a6c-42af-ac76-95d78714517c-operator-scripts\") pod \"aodh-7a49-account-create-update-rlk6x\" (UID: \"2cb1855f-7a6c-42af-ac76-95d78714517c\") " pod="openstack/aodh-7a49-account-create-update-rlk6x" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.085231 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31b7dab7-5f37-4d0a-9606-965afa056370-operator-scripts\") pod \"aodh-db-create-64c95\" (UID: \"31b7dab7-5f37-4d0a-9606-965afa056370\") " pod="openstack/aodh-db-create-64c95" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.085260 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkvtq\" (UniqueName: \"kubernetes.io/projected/31b7dab7-5f37-4d0a-9606-965afa056370-kube-api-access-qkvtq\") pod \"aodh-db-create-64c95\" (UID: \"31b7dab7-5f37-4d0a-9606-965afa056370\") " pod="openstack/aodh-db-create-64c95" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.085588 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c94kf\" (UniqueName: \"kubernetes.io/projected/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-kube-api-access-c94kf\") on node \"crc\" DevicePath \"\"" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.086076 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31b7dab7-5f37-4d0a-9606-965afa056370-operator-scripts\") pod \"aodh-db-create-64c95\" (UID: \"31b7dab7-5f37-4d0a-9606-965afa056370\") " pod="openstack/aodh-db-create-64c95" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.086097 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a" (UID: "319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.091949 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a" (UID: "319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.092029 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a" (UID: "319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.106341 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkvtq\" (UniqueName: \"kubernetes.io/projected/31b7dab7-5f37-4d0a-9606-965afa056370-kube-api-access-qkvtq\") pod \"aodh-db-create-64c95\" (UID: \"31b7dab7-5f37-4d0a-9606-965afa056370\") " pod="openstack/aodh-db-create-64c95" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.175869 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-64c95" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.187138 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp88t\" (UniqueName: \"kubernetes.io/projected/2cb1855f-7a6c-42af-ac76-95d78714517c-kube-api-access-tp88t\") pod \"aodh-7a49-account-create-update-rlk6x\" (UID: \"2cb1855f-7a6c-42af-ac76-95d78714517c\") " pod="openstack/aodh-7a49-account-create-update-rlk6x" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.187204 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb1855f-7a6c-42af-ac76-95d78714517c-operator-scripts\") pod \"aodh-7a49-account-create-update-rlk6x\" (UID: \"2cb1855f-7a6c-42af-ac76-95d78714517c\") " pod="openstack/aodh-7a49-account-create-update-rlk6x" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.187356 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.187372 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.187385 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.188009 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb1855f-7a6c-42af-ac76-95d78714517c-operator-scripts\") pod \"aodh-7a49-account-create-update-rlk6x\" (UID: \"2cb1855f-7a6c-42af-ac76-95d78714517c\") " pod="openstack/aodh-7a49-account-create-update-rlk6x" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.206341 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp88t\" (UniqueName: \"kubernetes.io/projected/2cb1855f-7a6c-42af-ac76-95d78714517c-kube-api-access-tp88t\") pod \"aodh-7a49-account-create-update-rlk6x\" (UID: \"2cb1855f-7a6c-42af-ac76-95d78714517c\") " pod="openstack/aodh-7a49-account-create-update-rlk6x" Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.277825 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-7a49-account-create-update-rlk6x" Feb 04 12:14:52 crc kubenswrapper[4728]: W0204 12:14:52.701300 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31b7dab7_5f37_4d0a_9606_965afa056370.slice/crio-cb78d36649e7b9f457f20ccf5be87503a97faa92c08a85745b48cafa87176b81 WatchSource:0}: Error finding container cb78d36649e7b9f457f20ccf5be87503a97faa92c08a85745b48cafa87176b81: Status 404 returned error can't find the container with id cb78d36649e7b9f457f20ccf5be87503a97faa92c08a85745b48cafa87176b81 Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.704442 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-64c95"] Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.783450 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7a49-account-create-update-rlk6x"] Feb 04 12:14:52 crc kubenswrapper[4728]: W0204 12:14:52.786905 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cb1855f_7a6c_42af_ac76_95d78714517c.slice/crio-2e0a264294904b7b123d7ec86fffe87169dcdf4b78f3507f902505aa3fc60df8 WatchSource:0}: Error finding container 2e0a264294904b7b123d7ec86fffe87169dcdf4b78f3507f902505aa3fc60df8: Status 404 returned error can't find the container with id 2e0a264294904b7b123d7ec86fffe87169dcdf4b78f3507f902505aa3fc60df8 Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.910829 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-64c95" event={"ID":"31b7dab7-5f37-4d0a-9606-965afa056370","Type":"ContainerStarted","Data":"cb78d36649e7b9f457f20ccf5be87503a97faa92c08a85745b48cafa87176b81"} Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.911877 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7a49-account-create-update-rlk6x" event={"ID":"2cb1855f-7a6c-42af-ac76-95d78714517c","Type":"ContainerStarted","Data":"2e0a264294904b7b123d7ec86fffe87169dcdf4b78f3507f902505aa3fc60df8"} Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.913739 4728 generic.go:334] "Generic (PLEG): container finished" podID="84f425e3-ba15-437d-addf-aa2081f736b5" containerID="3326601329e1332a3e753bd36ab49dd44612e85eea1e63841ba6770f31db3a67" exitCode=137 Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.931293 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2607e08f-8499-4718-95b4-1377b153c155","Type":"ContainerStarted","Data":"4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3"} Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.931317 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient"
Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.955266 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.955246322 podStartE2EDuration="2.955246322s" podCreationTimestamp="2026-02-04 12:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 12:14:52.953005568 +0000 UTC m=+2842.095709973" watchObservedRunningTime="2026-02-04 12:14:52.955246322 +0000 UTC m=+2842.097950707"
Feb 04 12:14:52 crc kubenswrapper[4728]: I0204 12:14:52.968037 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a" podUID="2607e08f-8499-4718-95b4-1377b153c155"
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.224183 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.227560 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="84f425e3-ba15-437d-addf-aa2081f736b5" podUID="2607e08f-8499-4718-95b4-1377b153c155"
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.407187 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-combined-ca-bundle\") pod \"84f425e3-ba15-437d-addf-aa2081f736b5\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") "
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.407628 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g42gx\" (UniqueName: \"kubernetes.io/projected/84f425e3-ba15-437d-addf-aa2081f736b5-kube-api-access-g42gx\") pod \"84f425e3-ba15-437d-addf-aa2081f736b5\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") "
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.407661 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config-secret\") pod \"84f425e3-ba15-437d-addf-aa2081f736b5\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") "
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.407833 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config\") pod \"84f425e3-ba15-437d-addf-aa2081f736b5\" (UID: \"84f425e3-ba15-437d-addf-aa2081f736b5\") "
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.412637 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84f425e3-ba15-437d-addf-aa2081f736b5-kube-api-access-g42gx" (OuterVolumeSpecName: "kube-api-access-g42gx") pod "84f425e3-ba15-437d-addf-aa2081f736b5" (UID: "84f425e3-ba15-437d-addf-aa2081f736b5"). InnerVolumeSpecName "kube-api-access-g42gx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.435024 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84f425e3-ba15-437d-addf-aa2081f736b5" (UID: "84f425e3-ba15-437d-addf-aa2081f736b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.438202 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "84f425e3-ba15-437d-addf-aa2081f736b5" (UID: "84f425e3-ba15-437d-addf-aa2081f736b5"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.466271 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "84f425e3-ba15-437d-addf-aa2081f736b5" (UID: "84f425e3-ba15-437d-addf-aa2081f736b5"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.510048 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.510086 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.510096 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g42gx\" (UniqueName: \"kubernetes.io/projected/84f425e3-ba15-437d-addf-aa2081f736b5-kube-api-access-g42gx\") on node \"crc\" DevicePath \"\""
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.510108 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/84f425e3-ba15-437d-addf-aa2081f736b5-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.565380 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a" path="/var/lib/kubelet/pods/319e34b8-dfb5-4d3d-9ac1-9e2819cc9b7a/volumes"
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.565942 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84f425e3-ba15-437d-addf-aa2081f736b5" path="/var/lib/kubelet/pods/84f425e3-ba15-437d-addf-aa2081f736b5/volumes"
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.939850 4728 generic.go:334] "Generic (PLEG): container finished" podID="2cb1855f-7a6c-42af-ac76-95d78714517c" containerID="ece34e87dc6a3d7bf3f2152635badcb89a85cb2cfbb5eb86f5f0b2808974ce49" exitCode=0
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.939919 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7a49-account-create-update-rlk6x" event={"ID":"2cb1855f-7a6c-42af-ac76-95d78714517c","Type":"ContainerDied","Data":"ece34e87dc6a3d7bf3f2152635badcb89a85cb2cfbb5eb86f5f0b2808974ce49"}
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.941291 4728 scope.go:117] "RemoveContainer" containerID="3326601329e1332a3e753bd36ab49dd44612e85eea1e63841ba6770f31db3a67"
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.941389 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.950532 4728 generic.go:334] "Generic (PLEG): container finished" podID="31b7dab7-5f37-4d0a-9606-965afa056370" containerID="03808ff4a15f8b09151571b9ded8969060829c89287baad0a669654b385e8d94" exitCode=0
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.950623 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-64c95" event={"ID":"31b7dab7-5f37-4d0a-9606-965afa056370","Type":"ContainerDied","Data":"03808ff4a15f8b09151571b9ded8969060829c89287baad0a669654b385e8d94"}
Feb 04 12:14:53 crc kubenswrapper[4728]: I0204 12:14:53.969997 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="84f425e3-ba15-437d-addf-aa2081f736b5" podUID="2607e08f-8499-4718-95b4-1377b153c155"
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.403900 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7a49-account-create-update-rlk6x"
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.409596 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-64c95"
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.564369 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31b7dab7-5f37-4d0a-9606-965afa056370-operator-scripts\") pod \"31b7dab7-5f37-4d0a-9606-965afa056370\" (UID: \"31b7dab7-5f37-4d0a-9606-965afa056370\") "
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.565073 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31b7dab7-5f37-4d0a-9606-965afa056370-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31b7dab7-5f37-4d0a-9606-965afa056370" (UID: "31b7dab7-5f37-4d0a-9606-965afa056370"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.565213 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp88t\" (UniqueName: \"kubernetes.io/projected/2cb1855f-7a6c-42af-ac76-95d78714517c-kube-api-access-tp88t\") pod \"2cb1855f-7a6c-42af-ac76-95d78714517c\" (UID: \"2cb1855f-7a6c-42af-ac76-95d78714517c\") "
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.565279 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb1855f-7a6c-42af-ac76-95d78714517c-operator-scripts\") pod \"2cb1855f-7a6c-42af-ac76-95d78714517c\" (UID: \"2cb1855f-7a6c-42af-ac76-95d78714517c\") "
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.565309 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkvtq\" (UniqueName: \"kubernetes.io/projected/31b7dab7-5f37-4d0a-9606-965afa056370-kube-api-access-qkvtq\") pod \"31b7dab7-5f37-4d0a-9606-965afa056370\" (UID: \"31b7dab7-5f37-4d0a-9606-965afa056370\") "
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.565886 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb1855f-7a6c-42af-ac76-95d78714517c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cb1855f-7a6c-42af-ac76-95d78714517c" (UID: "2cb1855f-7a6c-42af-ac76-95d78714517c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.567140 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cb1855f-7a6c-42af-ac76-95d78714517c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.567178 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31b7dab7-5f37-4d0a-9606-965afa056370-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.571108 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb1855f-7a6c-42af-ac76-95d78714517c-kube-api-access-tp88t" (OuterVolumeSpecName: "kube-api-access-tp88t") pod "2cb1855f-7a6c-42af-ac76-95d78714517c" (UID: "2cb1855f-7a6c-42af-ac76-95d78714517c"). InnerVolumeSpecName "kube-api-access-tp88t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.571127 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b7dab7-5f37-4d0a-9606-965afa056370-kube-api-access-qkvtq" (OuterVolumeSpecName: "kube-api-access-qkvtq") pod "31b7dab7-5f37-4d0a-9606-965afa056370" (UID: "31b7dab7-5f37-4d0a-9606-965afa056370"). InnerVolumeSpecName "kube-api-access-qkvtq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.668043 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp88t\" (UniqueName: \"kubernetes.io/projected/2cb1855f-7a6c-42af-ac76-95d78714517c-kube-api-access-tp88t\") on node \"crc\" DevicePath \"\""
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.668078 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkvtq\" (UniqueName: \"kubernetes.io/projected/31b7dab7-5f37-4d0a-9606-965afa056370-kube-api-access-qkvtq\") on node \"crc\" DevicePath \"\""
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.972332 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7a49-account-create-update-rlk6x"
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.972316 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7a49-account-create-update-rlk6x" event={"ID":"2cb1855f-7a6c-42af-ac76-95d78714517c","Type":"ContainerDied","Data":"2e0a264294904b7b123d7ec86fffe87169dcdf4b78f3507f902505aa3fc60df8"}
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.972460 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e0a264294904b7b123d7ec86fffe87169dcdf4b78f3507f902505aa3fc60df8"
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.973850 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-64c95" event={"ID":"31b7dab7-5f37-4d0a-9606-965afa056370","Type":"ContainerDied","Data":"cb78d36649e7b9f457f20ccf5be87503a97faa92c08a85745b48cafa87176b81"}
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.973879 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb78d36649e7b9f457f20ccf5be87503a97faa92c08a85745b48cafa87176b81"
Feb 04 12:14:55 crc kubenswrapper[4728]: I0204 12:14:55.973912 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-64c95"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.166362 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"]
Feb 04 12:15:00 crc kubenswrapper[4728]: E0204 12:15:00.167433 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb1855f-7a6c-42af-ac76-95d78714517c" containerName="mariadb-account-create-update"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.167449 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb1855f-7a6c-42af-ac76-95d78714517c" containerName="mariadb-account-create-update"
Feb 04 12:15:00 crc kubenswrapper[4728]: E0204 12:15:00.167475 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b7dab7-5f37-4d0a-9606-965afa056370" containerName="mariadb-database-create"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.167490 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b7dab7-5f37-4d0a-9606-965afa056370" containerName="mariadb-database-create"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.167672 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb1855f-7a6c-42af-ac76-95d78714517c" containerName="mariadb-account-create-update"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.167698 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b7dab7-5f37-4d0a-9606-965afa056370" containerName="mariadb-database-create"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.168384 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.175785 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.177050 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.180825 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"]
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.357337 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwnl7\" (UniqueName: \"kubernetes.io/projected/249007d1-7133-4a8d-8e92-a2f66427d356-kube-api-access-fwnl7\") pod \"collect-profiles-29503455-h6wxm\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.357522 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/249007d1-7133-4a8d-8e92-a2f66427d356-config-volume\") pod \"collect-profiles-29503455-h6wxm\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.357630 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/249007d1-7133-4a8d-8e92-a2f66427d356-secret-volume\") pod \"collect-profiles-29503455-h6wxm\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.459726 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/249007d1-7133-4a8d-8e92-a2f66427d356-config-volume\") pod \"collect-profiles-29503455-h6wxm\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.460215 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/249007d1-7133-4a8d-8e92-a2f66427d356-secret-volume\") pod \"collect-profiles-29503455-h6wxm\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.460277 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwnl7\" (UniqueName: \"kubernetes.io/projected/249007d1-7133-4a8d-8e92-a2f66427d356-kube-api-access-fwnl7\") pod \"collect-profiles-29503455-h6wxm\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.460693 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/249007d1-7133-4a8d-8e92-a2f66427d356-config-volume\") pod \"collect-profiles-29503455-h6wxm\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.468527 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/249007d1-7133-4a8d-8e92-a2f66427d356-secret-volume\") pod \"collect-profiles-29503455-h6wxm\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.481974 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwnl7\" (UniqueName: \"kubernetes.io/projected/249007d1-7133-4a8d-8e92-a2f66427d356-kube-api-access-fwnl7\") pod \"collect-profiles-29503455-h6wxm\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.492822 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:00 crc kubenswrapper[4728]: I0204 12:15:00.754220 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"]
Feb 04 12:15:01 crc kubenswrapper[4728]: I0204 12:15:01.036992 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm" event={"ID":"249007d1-7133-4a8d-8e92-a2f66427d356","Type":"ContainerStarted","Data":"c915020126cd238e8ecba75fdee1c0c8c1f468690e84dcd9d8fbbf405fb1bf51"}
Feb 04 12:15:01 crc kubenswrapper[4728]: I0204 12:15:01.037050 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm" event={"ID":"249007d1-7133-4a8d-8e92-a2f66427d356","Type":"ContainerStarted","Data":"56543ab30baf4708505935cf7d8be6ac2fde6c61df4e5c90c6a22f864c91fe93"}
Feb 04 12:15:01 crc kubenswrapper[4728]: I0204 12:15:01.061491 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm" podStartSLOduration=1.061471967 podStartE2EDuration="1.061471967s" podCreationTimestamp="2026-02-04 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 12:15:01.059639962 +0000 UTC m=+2850.202344347" watchObservedRunningTime="2026-02-04 12:15:01.061471967 +0000 UTC m=+2850.204176352"
Feb 04 12:15:02 crc kubenswrapper[4728]: I0204 12:15:02.050112 4728 generic.go:334] "Generic (PLEG): container finished" podID="249007d1-7133-4a8d-8e92-a2f66427d356" containerID="c915020126cd238e8ecba75fdee1c0c8c1f468690e84dcd9d8fbbf405fb1bf51" exitCode=0
Feb 04 12:15:02 crc kubenswrapper[4728]: I0204 12:15:02.050182 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm" event={"ID":"249007d1-7133-4a8d-8e92-a2f66427d356","Type":"ContainerDied","Data":"c915020126cd238e8ecba75fdee1c0c8c1f468690e84dcd9d8fbbf405fb1bf51"}
Feb 04 12:15:03 crc kubenswrapper[4728]: I0204 12:15:03.383851 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:03 crc kubenswrapper[4728]: I0204 12:15:03.525662 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/249007d1-7133-4a8d-8e92-a2f66427d356-secret-volume\") pod \"249007d1-7133-4a8d-8e92-a2f66427d356\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") "
Feb 04 12:15:03 crc kubenswrapper[4728]: I0204 12:15:03.525787 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/249007d1-7133-4a8d-8e92-a2f66427d356-config-volume\") pod \"249007d1-7133-4a8d-8e92-a2f66427d356\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") "
Feb 04 12:15:03 crc kubenswrapper[4728]: I0204 12:15:03.526084 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwnl7\" (UniqueName: \"kubernetes.io/projected/249007d1-7133-4a8d-8e92-a2f66427d356-kube-api-access-fwnl7\") pod \"249007d1-7133-4a8d-8e92-a2f66427d356\" (UID: \"249007d1-7133-4a8d-8e92-a2f66427d356\") "
Feb 04 12:15:03 crc kubenswrapper[4728]: I0204 12:15:03.526820 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/249007d1-7133-4a8d-8e92-a2f66427d356-config-volume" (OuterVolumeSpecName: "config-volume") pod "249007d1-7133-4a8d-8e92-a2f66427d356" (UID: "249007d1-7133-4a8d-8e92-a2f66427d356"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 12:15:03 crc kubenswrapper[4728]: I0204 12:15:03.531808 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/249007d1-7133-4a8d-8e92-a2f66427d356-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "249007d1-7133-4a8d-8e92-a2f66427d356" (UID: "249007d1-7133-4a8d-8e92-a2f66427d356"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:15:03 crc kubenswrapper[4728]: I0204 12:15:03.531991 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/249007d1-7133-4a8d-8e92-a2f66427d356-kube-api-access-fwnl7" (OuterVolumeSpecName: "kube-api-access-fwnl7") pod "249007d1-7133-4a8d-8e92-a2f66427d356" (UID: "249007d1-7133-4a8d-8e92-a2f66427d356"). InnerVolumeSpecName "kube-api-access-fwnl7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:15:03 crc kubenswrapper[4728]: I0204 12:15:03.629119 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/249007d1-7133-4a8d-8e92-a2f66427d356-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 04 12:15:03 crc kubenswrapper[4728]: I0204 12:15:03.629426 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/249007d1-7133-4a8d-8e92-a2f66427d356-config-volume\") on node \"crc\" DevicePath \"\""
Feb 04 12:15:03 crc kubenswrapper[4728]: I0204 12:15:03.629530 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwnl7\" (UniqueName: \"kubernetes.io/projected/249007d1-7133-4a8d-8e92-a2f66427d356-kube-api-access-fwnl7\") on node \"crc\" DevicePath \"\""
Feb 04 12:15:04 crc kubenswrapper[4728]: I0204 12:15:04.069622 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm" event={"ID":"249007d1-7133-4a8d-8e92-a2f66427d356","Type":"ContainerDied","Data":"56543ab30baf4708505935cf7d8be6ac2fde6c61df4e5c90c6a22f864c91fe93"}
Feb 04 12:15:04 crc kubenswrapper[4728]: I0204 12:15:04.069667 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56543ab30baf4708505935cf7d8be6ac2fde6c61df4e5c90c6a22f864c91fe93"
Feb 04 12:15:04 crc kubenswrapper[4728]: I0204 12:15:04.069714 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503455-h6wxm"
Feb 04 12:15:04 crc kubenswrapper[4728]: I0204 12:15:04.460228 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c"]
Feb 04 12:15:04 crc kubenswrapper[4728]: I0204 12:15:04.469611 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503410-lkj8c"]
Feb 04 12:15:05 crc kubenswrapper[4728]: I0204 12:15:05.448409 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 12:15:05 crc kubenswrapper[4728]: I0204 12:15:05.448625 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 12:15:05 crc kubenswrapper[4728]: I0204 12:15:05.564409 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d800f63b-2465-4553-aa78-99fff8f484bb" path="/var/lib/kubelet/pods/d800f63b-2465-4553-aa78-99fff8f484bb/volumes"
Feb 04 12:15:35 crc kubenswrapper[4728]: I0204 12:15:35.448002 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 12:15:35 crc kubenswrapper[4728]: I0204 12:15:35.450362 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 12:15:35 crc kubenswrapper[4728]: I0204 12:15:35.450622 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 12:15:35 crc kubenswrapper[4728]: I0204 12:15:35.451935 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 12:15:35 crc kubenswrapper[4728]: I0204 12:15:35.452285 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409" gracePeriod=600
Feb 04 12:15:35 crc kubenswrapper[4728]: E0204 12:15:35.573514 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:15:36 crc kubenswrapper[4728]: I0204 12:15:36.413962 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409" exitCode=0
Feb 04 12:15:36 crc kubenswrapper[4728]: I0204 12:15:36.414009 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"}
Feb 04 12:15:36 crc kubenswrapper[4728]: I0204 12:15:36.414301 4728 scope.go:117] "RemoveContainer" containerID="050fd42d478ece55fcb94795bb9bece6d44a8e884401343e87b6b7c0856343b1"
Feb 04 12:15:36 crc kubenswrapper[4728]: I0204 12:15:36.414974 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:15:36 crc kubenswrapper[4728]: E0204 12:15:36.415318 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:15:40 crc kubenswrapper[4728]: I0204 12:15:40.116278 4728 scope.go:117] "RemoveContainer" containerID="a390d0391a5f04f1b39a8b6170ce85f7bb4f9cc0e457acdaafb0ea159f620355"
Feb 04 12:15:48 crc kubenswrapper[4728]: I0204 12:15:48.554703 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:15:48 crc kubenswrapper[4728]: E0204 12:15:48.555965 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:16:00 crc kubenswrapper[4728]: I0204 12:16:00.554033 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:16:00 crc kubenswrapper[4728]: E0204 12:16:00.554832 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:16:15 crc kubenswrapper[4728]: I0204 12:16:15.553706 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:16:15 crc kubenswrapper[4728]: E0204 12:16:15.554440 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:16:29 crc kubenswrapper[4728]: I0204 12:16:29.554025 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:16:29 crc kubenswrapper[4728]: E0204 12:16:29.554768 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:16:41 crc kubenswrapper[4728]: I0204 12:16:41.560238 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:16:41 crc kubenswrapper[4728]: E0204 12:16:41.561028 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:16:52 crc kubenswrapper[4728]: I0204 12:16:52.553776 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:16:52 crc kubenswrapper[4728]: E0204 12:16:52.554473 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:17:06 crc kubenswrapper[4728]: I0204 12:17:06.554055 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:17:06 crc kubenswrapper[4728]: E0204 12:17:06.554912 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:17:18 crc kubenswrapper[4728]: I0204 12:17:18.553911 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:17:18 crc kubenswrapper[4728]: E0204 12:17:18.554682 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:17:33 crc kubenswrapper[4728]: I0204 12:17:33.553928 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:17:33 crc kubenswrapper[4728]: E0204 12:17:33.554868 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:17:45 crc kubenswrapper[4728]: I0204 12:17:45.554472 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:17:45 crc kubenswrapper[4728]: E0204 12:17:45.555414 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:18:00 crc kubenswrapper[4728]: I0204 12:18:00.553457 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:18:00 crc kubenswrapper[4728]: E0204 12:18:00.554323 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:18:15 crc kubenswrapper[4728]: I0204 12:18:15.554172 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:18:15 crc kubenswrapper[4728]: E0204 12:18:15.554942 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:18:29 crc kubenswrapper[4728]: I0204 12:18:29.554486 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:18:29 crc kubenswrapper[4728]: E0204 12:18:29.555622 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:18:42 crc kubenswrapper[4728]: I0204 12:18:42.554564 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:18:42 crc kubenswrapper[4728]: E0204 12:18:42.555518 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.223412 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wz95m"]
Feb 04 12:18:49 crc kubenswrapper[4728]: E0204 12:18:49.224405 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="249007d1-7133-4a8d-8e92-a2f66427d356" containerName="collect-profiles"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.224419 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="249007d1-7133-4a8d-8e92-a2f66427d356" containerName="collect-profiles"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.224644 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="249007d1-7133-4a8d-8e92-a2f66427d356" containerName="collect-profiles"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.226281 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.244949 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wz95m"]
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.393933 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-utilities\") pod \"certified-operators-wz95m\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") " pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.394239 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzq8z\" (UniqueName: \"kubernetes.io/projected/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-kube-api-access-xzq8z\") pod \"certified-operators-wz95m\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") " pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.394358 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-catalog-content\") pod \"certified-operators-wz95m\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") " pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.496605 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzq8z\" (UniqueName: \"kubernetes.io/projected/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-kube-api-access-xzq8z\") pod \"certified-operators-wz95m\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") " pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.496972 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-catalog-content\") pod \"certified-operators-wz95m\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") " pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.497053 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-utilities\") pod \"certified-operators-wz95m\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") " pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.498187 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-utilities\") pod \"certified-operators-wz95m\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") " pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.498202 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-catalog-content\") pod \"certified-operators-wz95m\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") " pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.522830 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzq8z\" (UniqueName: \"kubernetes.io/projected/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-kube-api-access-xzq8z\") pod \"certified-operators-wz95m\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") " pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:49 crc kubenswrapper[4728]: I0204 12:18:49.587289 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:50 crc kubenswrapper[4728]: I0204 12:18:50.045288 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wz95m"]
Feb 04 12:18:50 crc kubenswrapper[4728]: I0204 12:18:50.151922 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz95m" event={"ID":"c6ab8ebc-1cad-4aa9-81f6-b350114820a1","Type":"ContainerStarted","Data":"25ce87d51f76ae7d1a417fa8747779e291f1c52c42047673fc891b4a4edb02bc"}
Feb 04 12:18:51 crc kubenswrapper[4728]: I0204 12:18:51.162703 4728 generic.go:334] "Generic (PLEG): container finished" podID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" containerID="69f94ecf9da5f4a6f430105adb25c3f7f3f0ae58b063ec75e1edd72146d05607" exitCode=0
Feb 04 12:18:51 crc kubenswrapper[4728]: I0204 12:18:51.162903 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz95m" event={"ID":"c6ab8ebc-1cad-4aa9-81f6-b350114820a1","Type":"ContainerDied","Data":"69f94ecf9da5f4a6f430105adb25c3f7f3f0ae58b063ec75e1edd72146d05607"}
Feb 04 12:18:51 crc kubenswrapper[4728]: I0204 12:18:51.165247 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 04 12:18:52 crc kubenswrapper[4728]: I0204 12:18:52.789246 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6dcb54f59-lnlx2_3b49d7d8-7c63-482c-b882-25c01e798afe/manager/0.log"
Feb 04 12:18:53 crc kubenswrapper[4728]: I0204 12:18:53.187040 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz95m" event={"ID":"c6ab8ebc-1cad-4aa9-81f6-b350114820a1","Type":"ContainerStarted","Data":"68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87"}
Feb 04 12:18:54 crc kubenswrapper[4728]: I0204 12:18:54.196466 4728 generic.go:334] "Generic (PLEG): container finished" podID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" containerID="68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87" exitCode=0
Feb 04 12:18:54 crc kubenswrapper[4728]: I0204 12:18:54.196509 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz95m" event={"ID":"c6ab8ebc-1cad-4aa9-81f6-b350114820a1","Type":"ContainerDied","Data":"68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87"}
Feb 04 12:18:55 crc kubenswrapper[4728]: I0204 12:18:55.554221 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:18:55 crc kubenswrapper[4728]: E0204 12:18:55.555081 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:18:56 crc kubenswrapper[4728]: I0204 12:18:56.218715 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz95m" event={"ID":"c6ab8ebc-1cad-4aa9-81f6-b350114820a1","Type":"ContainerStarted","Data":"428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c"}
Feb 04 12:18:56 crc kubenswrapper[4728]: I0204 12:18:56.242073 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wz95m" podStartSLOduration=3.239411351 podStartE2EDuration="7.242054716s" podCreationTimestamp="2026-02-04 12:18:49 +0000 UTC" firstStartedPulling="2026-02-04 12:18:51.164960482 +0000 UTC m=+3080.307664867" lastFinishedPulling="2026-02-04 12:18:55.167603847 +0000 UTC m=+3084.310308232" observedRunningTime="2026-02-04 12:18:56.240648152 +0000 UTC m=+3085.383352547" watchObservedRunningTime="2026-02-04 12:18:56.242054716 +0000 UTC m=+3085.384759101"
Feb 04 12:18:59 crc kubenswrapper[4728]: I0204 12:18:59.587444 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:59 crc kubenswrapper[4728]: I0204 12:18:59.588249 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:18:59 crc kubenswrapper[4728]: I0204 12:18:59.637271 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:19:00 crc kubenswrapper[4728]: I0204 12:19:00.293273 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:19:00 crc kubenswrapper[4728]: I0204 12:19:00.343330 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wz95m"]
Feb 04 12:19:02 crc kubenswrapper[4728]: I0204 12:19:02.279304 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wz95m" podUID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" containerName="registry-server" containerID="cri-o://428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c" gracePeriod=2
Feb 04 12:19:02 crc kubenswrapper[4728]: I0204 12:19:02.736454 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:19:02 crc kubenswrapper[4728]: I0204 12:19:02.857508 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzq8z\" (UniqueName: \"kubernetes.io/projected/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-kube-api-access-xzq8z\") pod \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") "
Feb 04 12:19:02 crc kubenswrapper[4728]: I0204 12:19:02.857611 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-catalog-content\") pod \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") "
Feb 04 12:19:02 crc kubenswrapper[4728]: I0204 12:19:02.857776 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-utilities\") pod \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\" (UID: \"c6ab8ebc-1cad-4aa9-81f6-b350114820a1\") "
Feb 04 12:19:02 crc kubenswrapper[4728]: I0204 12:19:02.859030 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-utilities" (OuterVolumeSpecName: "utilities") pod "c6ab8ebc-1cad-4aa9-81f6-b350114820a1" (UID: "c6ab8ebc-1cad-4aa9-81f6-b350114820a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:19:02 crc kubenswrapper[4728]: I0204 12:19:02.863245 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-kube-api-access-xzq8z" (OuterVolumeSpecName: "kube-api-access-xzq8z") pod "c6ab8ebc-1cad-4aa9-81f6-b350114820a1" (UID: "c6ab8ebc-1cad-4aa9-81f6-b350114820a1"). InnerVolumeSpecName "kube-api-access-xzq8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:19:02 crc kubenswrapper[4728]: I0204 12:19:02.918119 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6ab8ebc-1cad-4aa9-81f6-b350114820a1" (UID: "c6ab8ebc-1cad-4aa9-81f6-b350114820a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:19:02 crc kubenswrapper[4728]: I0204 12:19:02.959648 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzq8z\" (UniqueName: \"kubernetes.io/projected/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-kube-api-access-xzq8z\") on node \"crc\" DevicePath \"\""
Feb 04 12:19:02 crc kubenswrapper[4728]: I0204 12:19:02.959686 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 12:19:02 crc kubenswrapper[4728]: I0204 12:19:02.959697 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6ab8ebc-1cad-4aa9-81f6-b350114820a1-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.289342 4728 generic.go:334] "Generic (PLEG): container finished" podID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" containerID="428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c" exitCode=0
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.289407 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wz95m"
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.289402 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz95m" event={"ID":"c6ab8ebc-1cad-4aa9-81f6-b350114820a1","Type":"ContainerDied","Data":"428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c"}
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.289679 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz95m" event={"ID":"c6ab8ebc-1cad-4aa9-81f6-b350114820a1","Type":"ContainerDied","Data":"25ce87d51f76ae7d1a417fa8747779e291f1c52c42047673fc891b4a4edb02bc"}
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.289705 4728 scope.go:117] "RemoveContainer" containerID="428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c"
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.312164 4728 scope.go:117] "RemoveContainer" containerID="68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87"
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.331244 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wz95m"]
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.342349 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wz95m"]
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.352378 4728 scope.go:117] "RemoveContainer" containerID="69f94ecf9da5f4a6f430105adb25c3f7f3f0ae58b063ec75e1edd72146d05607"
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.382597 4728 scope.go:117] "RemoveContainer" containerID="428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c"
Feb 04 12:19:03 crc kubenswrapper[4728]: E0204 12:19:03.383212 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c\": container with ID starting with 428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c not found: ID does not exist" containerID="428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c"
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.383293 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c"} err="failed to get container status \"428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c\": rpc error: code = NotFound desc = could not find container \"428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c\": container with ID starting with 428875f6d2571e6bf1823e834bedfb0b5b81823232238f74481d8804e3534f0c not found: ID does not exist"
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.383456 4728 scope.go:117] "RemoveContainer" containerID="68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87"
Feb 04 12:19:03 crc kubenswrapper[4728]: E0204 12:19:03.384050 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87\": container with ID starting with 68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87 not found: ID does not exist" containerID="68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87"
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.384081 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87"} err="failed to get container status \"68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87\": rpc error: code = NotFound desc = could not find container \"68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87\": container with ID starting with 68e52e251df6851f7b8483abd5d314e12c648378582be0b12c9e721427c9de87 not found: ID does not exist"
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.384106 4728 scope.go:117] "RemoveContainer" containerID="69f94ecf9da5f4a6f430105adb25c3f7f3f0ae58b063ec75e1edd72146d05607"
Feb 04 12:19:03 crc kubenswrapper[4728]: E0204 12:19:03.384463 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f94ecf9da5f4a6f430105adb25c3f7f3f0ae58b063ec75e1edd72146d05607\": container with ID starting with 69f94ecf9da5f4a6f430105adb25c3f7f3f0ae58b063ec75e1edd72146d05607 not found: ID does not exist" containerID="69f94ecf9da5f4a6f430105adb25c3f7f3f0ae58b063ec75e1edd72146d05607"
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.384512 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f94ecf9da5f4a6f430105adb25c3f7f3f0ae58b063ec75e1edd72146d05607"} err="failed to get container status \"69f94ecf9da5f4a6f430105adb25c3f7f3f0ae58b063ec75e1edd72146d05607\": rpc error: code = NotFound desc = could not find container \"69f94ecf9da5f4a6f430105adb25c3f7f3f0ae58b063ec75e1edd72146d05607\": container with ID starting with 69f94ecf9da5f4a6f430105adb25c3f7f3f0ae58b063ec75e1edd72146d05607 not found: ID does not exist"
Feb 04 12:19:03 crc kubenswrapper[4728]: I0204 12:19:03.564701 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" path="/var/lib/kubelet/pods/c6ab8ebc-1cad-4aa9-81f6-b350114820a1/volumes"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.721453 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"]
Feb 04 12:19:07 crc kubenswrapper[4728]: E0204 12:19:07.722307 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" containerName="registry-server"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.722320 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" containerName="registry-server"
Feb 04 12:19:07 crc kubenswrapper[4728]: E0204 12:19:07.722332 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" containerName="extract-content"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.722338 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" containerName="extract-content"
Feb 04 12:19:07 crc kubenswrapper[4728]: E0204 12:19:07.722365 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" containerName="extract-utilities"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.722372 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" containerName="extract-utilities"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.722573 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ab8ebc-1cad-4aa9-81f6-b350114820a1" containerName="registry-server"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.724148 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.726693 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.733170 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"]
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.869865 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.869942 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.870210 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz9fv\" (UniqueName: \"kubernetes.io/projected/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-kube-api-access-nz9fv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.971505 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz9fv\" (UniqueName: \"kubernetes.io/projected/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-kube-api-access-nz9fv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.971579 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.971600 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.972110 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.972258 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"
Feb 04 12:19:07 crc kubenswrapper[4728]: I0204 12:19:07.997733 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz9fv\" (UniqueName: \"kubernetes.io/projected/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-kube-api-access-nz9fv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"
Feb 04 12:19:08 crc kubenswrapper[4728]: I0204 12:19:08.108478 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"
Feb 04 12:19:08 crc kubenswrapper[4728]: I0204 12:19:08.528321 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp"]
Feb 04 12:19:08 crc kubenswrapper[4728]: W0204 12:19:08.542260 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00be6ffa_f38d_4321_bc01_9f7fe4d03ed6.slice/crio-e22a2837d965e0744ed8e4151866f3cd7e602551d3620c270758fe908a24d9ad WatchSource:0}: Error finding container e22a2837d965e0744ed8e4151866f3cd7e602551d3620c270758fe908a24d9ad: Status 404 returned error can't find the container with id e22a2837d965e0744ed8e4151866f3cd7e602551d3620c270758fe908a24d9ad
Feb 04 12:19:09 crc kubenswrapper[4728]: I0204 12:19:09.346370 4728 generic.go:334] "Generic (PLEG): container finished" podID="00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" containerID="3b6d56381761e59e9727d7d6c1daa3058c2b171e0575f602f29a673a1e65dfdd" exitCode=0
Feb 04 12:19:09 crc kubenswrapper[4728]: I0204 12:19:09.346490 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp" event={"ID":"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6","Type":"ContainerDied","Data":"3b6d56381761e59e9727d7d6c1daa3058c2b171e0575f602f29a673a1e65dfdd"}
Feb 04 12:19:09 crc kubenswrapper[4728]: I0204 12:19:09.346677 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp" event={"ID":"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6","Type":"ContainerStarted","Data":"e22a2837d965e0744ed8e4151866f3cd7e602551d3620c270758fe908a24d9ad"}
Feb 04 12:19:10 crc kubenswrapper[4728]: I0204 12:19:10.554424 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:19:10 crc kubenswrapper[4728]: E0204 12:19:10.554860 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:19:12 crc kubenswrapper[4728]: I0204 12:19:12.374988 4728 generic.go:334] "Generic (PLEG): container finished" podID="00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" containerID="5552b450f808e99126fe063d10d2529efd21cc271880b5a999d4c0a16e97fd74" exitCode=0
Feb 04 12:19:12 crc kubenswrapper[4728]: I0204 12:19:12.375040 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp" event={"ID":"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6","Type":"ContainerDied","Data":"5552b450f808e99126fe063d10d2529efd21cc271880b5a999d4c0a16e97fd74"}
Feb 04 12:19:13 crc kubenswrapper[4728]: I0204 12:19:13.385228 4728 generic.go:334] "Generic (PLEG): container finished" podID="00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" containerID="c656b6a387a3db7f1f1adc668169f4314b1901132ab1d195cf7c0fa8e7c67aad" exitCode=0
Feb 04 12:19:13 crc kubenswrapper[4728]: I0204 12:19:13.385276 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp" event={"ID":"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6","Type":"ContainerDied","Data":"c656b6a387a3db7f1f1adc668169f4314b1901132ab1d195cf7c0fa8e7c67aad"} Feb 04 12:19:14 crc kubenswrapper[4728]: I0204 12:19:14.728373 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp" Feb 04 12:19:14 crc kubenswrapper[4728]: I0204 12:19:14.910815 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-bundle\") pod \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " Feb 04 12:19:14 crc kubenswrapper[4728]: I0204 12:19:14.911322 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-util\") pod \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " Feb 04 12:19:14 crc kubenswrapper[4728]: I0204 12:19:14.911502 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz9fv\" (UniqueName: \"kubernetes.io/projected/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-kube-api-access-nz9fv\") pod \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\" (UID: \"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6\") " Feb 04 12:19:14 crc kubenswrapper[4728]: I0204 12:19:14.914304 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-bundle" (OuterVolumeSpecName: "bundle") pod "00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" (UID: "00be6ffa-f38d-4321-bc01-9f7fe4d03ed6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:19:14 crc kubenswrapper[4728]: I0204 12:19:14.918823 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-kube-api-access-nz9fv" (OuterVolumeSpecName: "kube-api-access-nz9fv") pod "00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" (UID: "00be6ffa-f38d-4321-bc01-9f7fe4d03ed6"). InnerVolumeSpecName "kube-api-access-nz9fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:19:14 crc kubenswrapper[4728]: I0204 12:19:14.924487 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-util" (OuterVolumeSpecName: "util") pod "00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" (UID: "00be6ffa-f38d-4321-bc01-9f7fe4d03ed6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:19:15 crc kubenswrapper[4728]: I0204 12:19:15.013859 4728 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-util\") on node \"crc\" DevicePath \"\"" Feb 04 12:19:15 crc kubenswrapper[4728]: I0204 12:19:15.013897 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz9fv\" (UniqueName: \"kubernetes.io/projected/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-kube-api-access-nz9fv\") on node \"crc\" DevicePath \"\"" Feb 04 12:19:15 crc kubenswrapper[4728]: I0204 12:19:15.013932 4728 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00be6ffa-f38d-4321-bc01-9f7fe4d03ed6-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:19:15 crc kubenswrapper[4728]: I0204 12:19:15.405239 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp" event={"ID":"00be6ffa-f38d-4321-bc01-9f7fe4d03ed6","Type":"ContainerDied","Data":"e22a2837d965e0744ed8e4151866f3cd7e602551d3620c270758fe908a24d9ad"} Feb 04 12:19:15 crc kubenswrapper[4728]: I0204 12:19:15.405308 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp" Feb 04 12:19:15 crc kubenswrapper[4728]: I0204 12:19:15.405318 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22a2837d965e0744ed8e4151866f3cd7e602551d3620c270758fe908a24d9ad" Feb 04 12:19:25 crc kubenswrapper[4728]: I0204 12:19:25.554032 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409" Feb 04 12:19:25 crc kubenswrapper[4728]: E0204 12:19:25.554972 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.117903 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-pld8h"] Feb 04 12:19:27 crc kubenswrapper[4728]: E0204 12:19:27.118767 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" containerName="extract" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.118798 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" containerName="extract" Feb 04 12:19:27 crc kubenswrapper[4728]: E0204 12:19:27.118824 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" containerName="util" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.118833 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" containerName="util" Feb 04 12:19:27 crc kubenswrapper[4728]: E0204 12:19:27.118846 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" containerName="pull" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.118855 4728 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" containerName="pull" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.119114 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="00be6ffa-f38d-4321-bc01-9f7fe4d03ed6" containerName="extract" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.119995 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pld8h" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.123594 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-dx2pm" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.123815 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.124006 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.135309 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-pld8h"] Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.242091 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr"] Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.243646 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.252250 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.252743 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-8ptdb" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.273362 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx795\" (UniqueName: \"kubernetes.io/projected/9679f4b5-ea5b-4998-92e0-08fd965f9b7f-kube-api-access-dx795\") pod \"obo-prometheus-operator-68bc856cb9-pld8h\" (UID: \"9679f4b5-ea5b-4998-92e0-08fd965f9b7f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pld8h" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.273487 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6"] Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.275068 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.285105 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr"] Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.331845 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6"] Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.379255 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx795\" (UniqueName: \"kubernetes.io/projected/9679f4b5-ea5b-4998-92e0-08fd965f9b7f-kube-api-access-dx795\") pod \"obo-prometheus-operator-68bc856cb9-pld8h\" (UID: \"9679f4b5-ea5b-4998-92e0-08fd965f9b7f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pld8h" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.379394 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/39c2f459-1049-49f1-9010-39b354d6f9e9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr\" (UID: \"39c2f459-1049-49f1-9010-39b354d6f9e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.379655 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d8e0076-8a70-44f0-a7c4-25c1a70a1e89-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6\" (UID: \"6d8e0076-8a70-44f0-a7c4-25c1a70a1e89\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.379781 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39c2f459-1049-49f1-9010-39b354d6f9e9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr\" (UID: \"39c2f459-1049-49f1-9010-39b354d6f9e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.379817 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d8e0076-8a70-44f0-a7c4-25c1a70a1e89-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6\" (UID: \"6d8e0076-8a70-44f0-a7c4-25c1a70a1e89\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.405159 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx795\" (UniqueName: \"kubernetes.io/projected/9679f4b5-ea5b-4998-92e0-08fd965f9b7f-kube-api-access-dx795\") pod \"obo-prometheus-operator-68bc856cb9-pld8h\" (UID: \"9679f4b5-ea5b-4998-92e0-08fd965f9b7f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pld8h" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.444880 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pld8h" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.471737 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6x4mh"] Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.473541 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.476929 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-bpt7c" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.479217 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.486894 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6x4mh"] Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.491041 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d8e0076-8a70-44f0-a7c4-25c1a70a1e89-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6\" (UID: \"6d8e0076-8a70-44f0-a7c4-25c1a70a1e89\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.482289 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d8e0076-8a70-44f0-a7c4-25c1a70a1e89-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6\" (UID: \"6d8e0076-8a70-44f0-a7c4-25c1a70a1e89\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.503339 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39c2f459-1049-49f1-9010-39b354d6f9e9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr\" (UID: \"39c2f459-1049-49f1-9010-39b354d6f9e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.503388 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d8e0076-8a70-44f0-a7c4-25c1a70a1e89-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6\" (UID: \"6d8e0076-8a70-44f0-a7c4-25c1a70a1e89\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.503556 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/39c2f459-1049-49f1-9010-39b354d6f9e9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr\" (UID: \"39c2f459-1049-49f1-9010-39b354d6f9e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.509117 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d8e0076-8a70-44f0-a7c4-25c1a70a1e89-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6\" (UID: \"6d8e0076-8a70-44f0-a7c4-25c1a70a1e89\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.519068 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/39c2f459-1049-49f1-9010-39b354d6f9e9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr\" (UID: \"39c2f459-1049-49f1-9010-39b354d6f9e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.525655 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39c2f459-1049-49f1-9010-39b354d6f9e9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr\" (UID: \"39c2f459-1049-49f1-9010-39b354d6f9e9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.575643 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.586679 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xdtdm"] Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.587914 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.599627 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.601495 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-2b4vq" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.612254 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5cf5a02e-7b7f-453a-9336-c4d98f8470e6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xdtdm\" (UID: \"5cf5a02e-7b7f-453a-9336-c4d98f8470e6\") " pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.612439 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnkw8\" (UniqueName: \"kubernetes.io/projected/5cf5a02e-7b7f-453a-9336-c4d98f8470e6-kube-api-access-lnkw8\") pod \"perses-operator-5bf474d74f-xdtdm\" (UID: \"5cf5a02e-7b7f-453a-9336-c4d98f8470e6\") " pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.612818 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/58b528f7-d9c7-4cde-b7d0-4197972ef92a-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6x4mh\" (UID: \"58b528f7-d9c7-4cde-b7d0-4197972ef92a\") " pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.613030 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dxbz\" (UniqueName: \"kubernetes.io/projected/58b528f7-d9c7-4cde-b7d0-4197972ef92a-kube-api-access-9dxbz\") pod \"observability-operator-59bdc8b94-6x4mh\" (UID: \"58b528f7-d9c7-4cde-b7d0-4197972ef92a\") " pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.631331 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xdtdm"] Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.732943 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/58b528f7-d9c7-4cde-b7d0-4197972ef92a-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6x4mh\" (UID: \"58b528f7-d9c7-4cde-b7d0-4197972ef92a\") " pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.733133 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dxbz\" (UniqueName: \"kubernetes.io/projected/58b528f7-d9c7-4cde-b7d0-4197972ef92a-kube-api-access-9dxbz\") pod \"observability-operator-59bdc8b94-6x4mh\" (UID: \"58b528f7-d9c7-4cde-b7d0-4197972ef92a\") " pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.733199 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5cf5a02e-7b7f-453a-9336-c4d98f8470e6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xdtdm\" (UID: \"5cf5a02e-7b7f-453a-9336-c4d98f8470e6\") " pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" 
Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.733360 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnkw8\" (UniqueName: \"kubernetes.io/projected/5cf5a02e-7b7f-453a-9336-c4d98f8470e6-kube-api-access-lnkw8\") pod \"perses-operator-5bf474d74f-xdtdm\" (UID: \"5cf5a02e-7b7f-453a-9336-c4d98f8470e6\") " pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.734944 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5cf5a02e-7b7f-453a-9336-c4d98f8470e6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xdtdm\" (UID: \"5cf5a02e-7b7f-453a-9336-c4d98f8470e6\") " pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.746565 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/58b528f7-d9c7-4cde-b7d0-4197972ef92a-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6x4mh\" (UID: \"58b528f7-d9c7-4cde-b7d0-4197972ef92a\") " pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.752213 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dxbz\" (UniqueName: \"kubernetes.io/projected/58b528f7-d9c7-4cde-b7d0-4197972ef92a-kube-api-access-9dxbz\") pod \"observability-operator-59bdc8b94-6x4mh\" (UID: \"58b528f7-d9c7-4cde-b7d0-4197972ef92a\") " pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.795251 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnkw8\" (UniqueName: \"kubernetes.io/projected/5cf5a02e-7b7f-453a-9336-c4d98f8470e6-kube-api-access-lnkw8\") pod \"perses-operator-5bf474d74f-xdtdm\" (UID: \"5cf5a02e-7b7f-453a-9336-c4d98f8470e6\") " pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.903408 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" Feb 04 12:19:27 crc kubenswrapper[4728]: I0204 12:19:27.922553 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" Feb 04 12:19:28 crc kubenswrapper[4728]: I0204 12:19:28.081612 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr"] Feb 04 12:19:28 crc kubenswrapper[4728]: W0204 12:19:28.117284 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39c2f459_1049_49f1_9010_39b354d6f9e9.slice/crio-0fce9383492d91e102374503d5923169e7474e78ac8d70694455ba257d23cec8 WatchSource:0}: Error finding container 0fce9383492d91e102374503d5923169e7474e78ac8d70694455ba257d23cec8: Status 404 returned error can't find the container with id 0fce9383492d91e102374503d5923169e7474e78ac8d70694455ba257d23cec8 Feb 04 12:19:28 crc kubenswrapper[4728]: I0204 12:19:28.184434 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-pld8h"] Feb 04 12:19:28 crc kubenswrapper[4728]: I0204 12:19:28.248171 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6"] Feb 04 12:19:28 crc kubenswrapper[4728]: W0204 12:19:28.266939 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9679f4b5_ea5b_4998_92e0_08fd965f9b7f.slice/crio-5f6be8b95bcc4b847de9572bc8f0ef52630993ec99ea506f8f3c6f2ff8495ce4 WatchSource:0}: Error finding container 5f6be8b95bcc4b847de9572bc8f0ef52630993ec99ea506f8f3c6f2ff8495ce4: Status 404 returned error can't find the container with id 5f6be8b95bcc4b847de9572bc8f0ef52630993ec99ea506f8f3c6f2ff8495ce4 Feb 04 12:19:28 crc kubenswrapper[4728]: W0204 12:19:28.290897 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8e0076_8a70_44f0_a7c4_25c1a70a1e89.slice/crio-94ebfb6b70722b37a0bd53271cf9d72ec7fd9962e0e07c49cf82080326a3190c WatchSource:0}: Error finding container 94ebfb6b70722b37a0bd53271cf9d72ec7fd9962e0e07c49cf82080326a3190c: Status 404 returned error can't find the container with id 94ebfb6b70722b37a0bd53271cf9d72ec7fd9962e0e07c49cf82080326a3190c Feb 04 12:19:29 crc kubenswrapper[4728]: I0204 12:19:28.556038 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr" event={"ID":"39c2f459-1049-49f1-9010-39b354d6f9e9","Type":"ContainerStarted","Data":"0fce9383492d91e102374503d5923169e7474e78ac8d70694455ba257d23cec8"} Feb 04 12:19:29 crc kubenswrapper[4728]: I0204 12:19:28.557286 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6" event={"ID":"6d8e0076-8a70-44f0-a7c4-25c1a70a1e89","Type":"ContainerStarted","Data":"94ebfb6b70722b37a0bd53271cf9d72ec7fd9962e0e07c49cf82080326a3190c"} Feb 04 12:19:29 crc kubenswrapper[4728]: I0204 12:19:28.558237 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pld8h" event={"ID":"9679f4b5-ea5b-4998-92e0-08fd965f9b7f","Type":"ContainerStarted","Data":"5f6be8b95bcc4b847de9572bc8f0ef52630993ec99ea506f8f3c6f2ff8495ce4"} Feb 04 12:19:29 crc kubenswrapper[4728]: I0204 12:19:29.747508 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6x4mh"] Feb 04 12:19:29 crc 
kubenswrapper[4728]: W0204 12:19:29.794941 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58b528f7_d9c7_4cde_b7d0_4197972ef92a.slice/crio-a2bcbd044797f306063ffcd176cc22c30cc1d6c74a91674e975d643ac8d5971d WatchSource:0}: Error finding container a2bcbd044797f306063ffcd176cc22c30cc1d6c74a91674e975d643ac8d5971d: Status 404 returned error can't find the container with id a2bcbd044797f306063ffcd176cc22c30cc1d6c74a91674e975d643ac8d5971d Feb 04 12:19:29 crc kubenswrapper[4728]: I0204 12:19:29.977965 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xdtdm"] Feb 04 12:19:30 crc kubenswrapper[4728]: W0204 12:19:30.000871 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cf5a02e_7b7f_453a_9336_c4d98f8470e6.slice/crio-d4e926e0ed9fadf07c32d4896f67650a755565226b39078a0c00d858c643758c WatchSource:0}: Error finding container d4e926e0ed9fadf07c32d4896f67650a755565226b39078a0c00d858c643758c: Status 404 returned error can't find the container with id d4e926e0ed9fadf07c32d4896f67650a755565226b39078a0c00d858c643758c Feb 04 12:19:30 crc kubenswrapper[4728]: I0204 12:19:30.607321 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" event={"ID":"5cf5a02e-7b7f-453a-9336-c4d98f8470e6","Type":"ContainerStarted","Data":"d4e926e0ed9fadf07c32d4896f67650a755565226b39078a0c00d858c643758c"} Feb 04 12:19:30 crc kubenswrapper[4728]: I0204 12:19:30.616528 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" event={"ID":"58b528f7-d9c7-4cde-b7d0-4197972ef92a","Type":"ContainerStarted","Data":"a2bcbd044797f306063ffcd176cc22c30cc1d6c74a91674e975d643ac8d5971d"} Feb 04 12:19:40 crc kubenswrapper[4728]: I0204 12:19:40.554188 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409" Feb 04 12:19:40 crc kubenswrapper[4728]: E0204 12:19:40.554951 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.752586 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pld8h" event={"ID":"9679f4b5-ea5b-4998-92e0-08fd965f9b7f","Type":"ContainerStarted","Data":"ae3d2cd9475c633104c3ebd3294f3b9ab3131253a2e295b286f1d697eae0bced"} Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.755557 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr" event={"ID":"39c2f459-1049-49f1-9010-39b354d6f9e9","Type":"ContainerStarted","Data":"bae0f78a2faea2941ae24e1033655027dbeb78f816754eb1bcf83d9b26676110"} Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.758484 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6" 
event={"ID":"6d8e0076-8a70-44f0-a7c4-25c1a70a1e89","Type":"ContainerStarted","Data":"4323d0873b827cb839a3f21f2881dfcb1b0269577ba70718039032dfa91c42bd"} Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.761058 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" event={"ID":"58b528f7-d9c7-4cde-b7d0-4197972ef92a","Type":"ContainerStarted","Data":"cd0c3a89a4ab1aeee89aa11ce4d1eb2f29918412b5dfc597357900af415ac318"} Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.761804 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.767055 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" event={"ID":"5cf5a02e-7b7f-453a-9336-c4d98f8470e6","Type":"ContainerStarted","Data":"dfc4f35cff09fdae02616c907df591ff9ba0394c7ae05395cb1b753ba16cf6ba"} Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.767146 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.781628 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pld8h" podStartSLOduration=2.227476764 podStartE2EDuration="16.781592182s" podCreationTimestamp="2026-02-04 12:19:27 +0000 UTC" firstStartedPulling="2026-02-04 12:19:28.28761902 +0000 UTC m=+3117.430323405" lastFinishedPulling="2026-02-04 12:19:42.841734438 +0000 UTC m=+3131.984438823" observedRunningTime="2026-02-04 12:19:43.774642103 +0000 UTC m=+3132.917346488" watchObservedRunningTime="2026-02-04 12:19:43.781592182 +0000 UTC m=+3132.924296557" Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.802895 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-6x4mh" podStartSLOduration=3.714877764 podStartE2EDuration="16.8028697s" podCreationTimestamp="2026-02-04 12:19:27 +0000 UTC" firstStartedPulling="2026-02-04 12:19:29.820991428 +0000 UTC m=+3118.963695813" lastFinishedPulling="2026-02-04 12:19:42.908983364 +0000 UTC m=+3132.051687749" observedRunningTime="2026-02-04 12:19:43.79958589 +0000 UTC m=+3132.942290295" watchObservedRunningTime="2026-02-04 12:19:43.8028697 +0000 UTC m=+3132.945574085" Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.824317 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr" podStartSLOduration=2.1311539489999998 podStartE2EDuration="16.824292671s" podCreationTimestamp="2026-02-04 12:19:27 +0000 UTC" firstStartedPulling="2026-02-04 12:19:28.145296466 +0000 UTC m=+3117.288000851" lastFinishedPulling="2026-02-04 12:19:42.838435198 +0000 UTC m=+3131.981139573" observedRunningTime="2026-02-04 12:19:43.815095358 +0000 UTC m=+3132.957799753" watchObservedRunningTime="2026-02-04 12:19:43.824292671 +0000 UTC m=+3132.966997056" Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.853567 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6" podStartSLOduration=2.322042096 podStartE2EDuration="16.853546993s" podCreationTimestamp="2026-02-04 12:19:27 +0000 UTC" firstStartedPulling="2026-02-04 
12:19:28.307504424 +0000 UTC m=+3117.450208809" lastFinishedPulling="2026-02-04 12:19:42.839009321 +0000 UTC m=+3131.981713706" observedRunningTime="2026-02-04 12:19:43.847834064 +0000 UTC m=+3132.990538459" watchObservedRunningTime="2026-02-04 12:19:43.853546993 +0000 UTC m=+3132.996251388" Feb 04 12:19:43 crc kubenswrapper[4728]: I0204 12:19:43.886817 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" podStartSLOduration=4.0492244490000004 podStartE2EDuration="16.886796412s" podCreationTimestamp="2026-02-04 12:19:27 +0000 UTC" firstStartedPulling="2026-02-04 12:19:30.007371794 +0000 UTC m=+3119.150076179" lastFinishedPulling="2026-02-04 12:19:42.844943757 +0000 UTC m=+3131.987648142" observedRunningTime="2026-02-04 12:19:43.877382193 +0000 UTC m=+3133.020086578" watchObservedRunningTime="2026-02-04 12:19:43.886796412 +0000 UTC m=+3133.029500807" Feb 04 12:19:44 crc kubenswrapper[4728]: I0204 12:19:44.777415 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-xdtdm" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.698119 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.701018 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.704674 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.704933 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.705055 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.705216 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.707567 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-76lq9" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.728296 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.790282 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.790371 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.790582 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.790618 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.790649 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.790940 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.791080 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c85vk\" (UniqueName: \"kubernetes.io/projected/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-kube-api-access-c85vk\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.892560 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c85vk\" (UniqueName: \"kubernetes.io/projected/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-kube-api-access-c85vk\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.892643 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.892742 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.892869 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.892895 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.892923 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.893010 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.893511 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.899848 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.900678 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.900877 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.901211 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 12:19:51.915134 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0" Feb 04 12:19:51 crc kubenswrapper[4728]: I0204 
12:19:51.921551 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c85vk\" (UniqueName: \"kubernetes.io/projected/bd40a8f1-c4bd-4c7f-b80e-708802b76a25-kube-api-access-c85vk\") pod \"alertmanager-metric-storage-0\" (UID: \"bd40a8f1-c4bd-4c7f-b80e-708802b76a25\") " pod="openstack/alertmanager-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.030233 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.271116 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.276619 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.283932 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.285577 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.285830 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-dpkpf"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.285945 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.287674 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.288558 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.288684 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.288849 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.306241 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.403072 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.403130 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.403176 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.403218 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlwhk\" (UniqueName: \"kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-kube-api-access-nlwhk\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.403253 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.403291 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.403333 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.403355 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.403408 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.403484 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.505847 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.505929 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.505980 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.506028 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.506058 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.506092 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlwhk\" (UniqueName: \"kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-kube-api-access-nlwhk\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.506120 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.506149 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.506179 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.506195 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.506741 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.507006 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.508669 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.510293 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.511464 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.515602 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.516845 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.519327 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.523718 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.530035 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlwhk\" (UniqueName: \"kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-kube-api-access-nlwhk\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.557176 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:19:52 crc kubenswrapper[4728]: E0204 12:19:52.557600 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.584463 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.597014 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.622526 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:19:52 crc kubenswrapper[4728]: I0204 12:19:52.842739 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"bd40a8f1-c4bd-4c7f-b80e-708802b76a25","Type":"ContainerStarted","Data":"3efcede78b4f3a7778bf4e4a9dd4c9ffb591dcdf6692084a5cdbf2767f64feff"}
Feb 04 12:19:53 crc kubenswrapper[4728]: I0204 12:19:53.134135 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 04 12:19:53 crc kubenswrapper[4728]: I0204 12:19:53.851878 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706","Type":"ContainerStarted","Data":"2352bbd0bb84b4babc805f0e785c53ed654830bf5b7d852a52e6c0d34cdc8c7c"}
Feb 04 12:19:57 crc kubenswrapper[4728]: I0204 12:19:57.924960 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-xdtdm"
Feb 04 12:19:59 crc kubenswrapper[4728]: I0204 12:19:59.927075 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"bd40a8f1-c4bd-4c7f-b80e-708802b76a25","Type":"ContainerStarted","Data":"23e6cfe9d2c1458155986c78aeca5d19d81fcb0d4fe05c487ce87b7c2feb6604"}
Feb 04 12:19:59 crc kubenswrapper[4728]: I0204 12:19:59.930773 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706","Type":"ContainerStarted","Data":"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e"}
Feb 04 12:20:03 crc kubenswrapper[4728]: I0204 12:20:03.554470 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:20:03 crc kubenswrapper[4728]: E0204 12:20:03.555154 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:20:04 crc kubenswrapper[4728]: I0204 12:20:04.986440 4728 generic.go:334] "Generic (PLEG): container finished" podID="bd40a8f1-c4bd-4c7f-b80e-708802b76a25" containerID="23e6cfe9d2c1458155986c78aeca5d19d81fcb0d4fe05c487ce87b7c2feb6604" exitCode=0
Feb 04 12:20:04 crc kubenswrapper[4728]: I0204 12:20:04.986687 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"bd40a8f1-c4bd-4c7f-b80e-708802b76a25","Type":"ContainerDied","Data":"23e6cfe9d2c1458155986c78aeca5d19d81fcb0d4fe05c487ce87b7c2feb6604"}
Feb 04 12:20:08 crc kubenswrapper[4728]: I0204 12:20:08.020432 4728 generic.go:334] "Generic (PLEG): container finished" podID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerID="f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e" exitCode=0
Feb 04 12:20:08 crc kubenswrapper[4728]: I0204 12:20:08.020526 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706","Type":"ContainerDied","Data":"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e"}
Feb 04 12:20:09 crc kubenswrapper[4728]: I0204 12:20:09.032195 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"bd40a8f1-c4bd-4c7f-b80e-708802b76a25","Type":"ContainerStarted","Data":"cfa89b4948272eb7c077bb587cabc34d6688abfaa23b2bb82f7544eaaee925c9"}
Feb 04 12:20:12 crc kubenswrapper[4728]: I0204 12:20:12.058335 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"bd40a8f1-c4bd-4c7f-b80e-708802b76a25","Type":"ContainerStarted","Data":"6cf7684128e97df00cdbc89963a0dafd6f7921fd68beba6b558f8f4735811f59"}
Feb 04 12:20:12 crc kubenswrapper[4728]: I0204 12:20:12.058627 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Feb 04 12:20:12 crc kubenswrapper[4728]: I0204 12:20:12.062477 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Feb 04 12:20:12 crc kubenswrapper[4728]: I0204 12:20:12.078834 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.493664219 podStartE2EDuration="21.07881442s" podCreationTimestamp="2026-02-04 12:19:51 +0000 UTC" firstStartedPulling="2026-02-04 12:19:52.6042011 +0000 UTC m=+3141.746905485" lastFinishedPulling="2026-02-04 12:20:08.189351301 +0000 UTC m=+3157.332055686" observedRunningTime="2026-02-04 12:20:12.076442322 +0000 UTC m=+3161.219146697" watchObservedRunningTime="2026-02-04 12:20:12.07881442 +0000 UTC m=+3161.221518805"
Feb 04 12:20:16 crc kubenswrapper[4728]: I0204 12:20:16.104158 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706","Type":"ContainerStarted","Data":"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f"}
Feb 04 12:20:17 crc kubenswrapper[4728]: I0204 12:20:17.554195 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409"
Feb 04 12:20:17 crc kubenswrapper[4728]: E0204 12:20:17.554804 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:20:19 crc kubenswrapper[4728]: I0204 12:20:19.138209 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706","Type":"ContainerStarted","Data":"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e"}
Feb 04 12:20:22 crc kubenswrapper[4728]: I0204 12:20:22.171608 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706","Type":"ContainerStarted","Data":"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16"}
Feb 04 12:20:22 crc kubenswrapper[4728]: I0204 12:20:22.205420 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=2.643503793 podStartE2EDuration="31.205400854s" podCreationTimestamp="2026-02-04 12:19:51 +0000 UTC" firstStartedPulling="2026-02-04 12:19:53.160400886 +0000 UTC m=+3142.303105271" lastFinishedPulling="2026-02-04 12:20:21.722297937 +0000 UTC m=+3170.865002332" observedRunningTime="2026-02-04 12:20:22.195469862 +0000 UTC m=+3171.338174247" watchObservedRunningTime="2026-02-04 12:20:22.205400854 +0000 UTC m=+3171.348105239"
Feb 04 12:20:22 crc kubenswrapper[4728]: I0204 12:20:22.623931 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:20:22 crc kubenswrapper[4728]: I0204 12:20:22.624244 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:20:22 crc kubenswrapper[4728]: I0204 12:20:22.627984 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:20:23 crc kubenswrapper[4728]: I0204 12:20:23.181435 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.479325 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.479948 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="2607e08f-8499-4718-95b4-1377b153c155" containerName="openstackclient" containerID="cri-o://4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3" gracePeriod=2
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.491438 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.516228 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 04 12:20:24 crc kubenswrapper[4728]: E0204 12:20:24.516778 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2607e08f-8499-4718-95b4-1377b153c155" containerName="openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.516801 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2607e08f-8499-4718-95b4-1377b153c155" containerName="openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.517042 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2607e08f-8499-4718-95b4-1377b153c155" containerName="openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.517947 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.521374 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2607e08f-8499-4718-95b4-1377b153c155" podUID="f6dfc933-2564-456a-ad35-ae3bf8afdbd3"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.530989 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.635951 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.636063 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-openstack-config-secret\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.636226 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-openstack-config\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.636384 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq4l8\" (UniqueName: \"kubernetes.io/projected/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-kube-api-access-fq4l8\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.738319 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq4l8\" (UniqueName: \"kubernetes.io/projected/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-kube-api-access-fq4l8\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.738438 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.738494 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-openstack-config-secret\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.738550 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-openstack-config\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.739962 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-openstack-config\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.756388 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.756690 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-openstack-config-secret\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.758897 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq4l8\" (UniqueName: \"kubernetes.io/projected/f6dfc933-2564-456a-ad35-ae3bf8afdbd3-kube-api-access-fq4l8\") pod \"openstackclient\" (UID: \"f6dfc933-2564-456a-ad35-ae3bf8afdbd3\") " pod="openstack/openstackclient"
Feb 04 12:20:24 crc kubenswrapper[4728]: I0204 12:20:24.846709 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 04 12:20:25 crc kubenswrapper[4728]: I0204 12:20:25.416876 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 04 12:20:25 crc kubenswrapper[4728]: W0204 12:20:25.420344 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6dfc933_2564_456a_ad35_ae3bf8afdbd3.slice/crio-bf8095b2aa151f5a404e6465bad00b7ed0efc5e30b4e98277bcbbc9ba655fc36 WatchSource:0}: Error finding container bf8095b2aa151f5a404e6465bad00b7ed0efc5e30b4e98277bcbbc9ba655fc36: Status 404 returned error can't find the container with id bf8095b2aa151f5a404e6465bad00b7ed0efc5e30b4e98277bcbbc9ba655fc36
Feb 04 12:20:25 crc kubenswrapper[4728]: I0204 12:20:25.927920 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.212456 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f6dfc933-2564-456a-ad35-ae3bf8afdbd3","Type":"ContainerStarted","Data":"89b2ef498d22aaaf045c817871c6cb21725f326d7c37c5963ca6bd975c8647c1"}
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.213038 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="prometheus" containerID="cri-o://2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f" gracePeriod=600
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.213080 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="config-reloader" containerID="cri-o://85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e" gracePeriod=600
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.213046 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="thanos-sidecar" containerID="cri-o://32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16" gracePeriod=600
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.213064 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f6dfc933-2564-456a-ad35-ae3bf8afdbd3","Type":"ContainerStarted","Data":"bf8095b2aa151f5a404e6465bad00b7ed0efc5e30b4e98277bcbbc9ba655fc36"}
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.250875 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.25085628 podStartE2EDuration="2.25085628s" podCreationTimestamp="2026-02-04 12:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 12:20:26.238873388 +0000 UTC m=+3175.381577773" watchObservedRunningTime="2026-02-04 12:20:26.25085628 +0000 UTC m=+3175.393560665"
Feb 04 12:20:26 crc kubenswrapper[4728]: E0204 12:20:26.561410 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eeeb853_d0f4_43d5_b5b0_f71eb0fb3706.slice/crio-85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eeeb853_d0f4_43d5_b5b0_f71eb0fb3706.slice/crio-conmon-85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2607e08f_8499_4718_95b4_1377b153c155.slice/crio-conmon-4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3.scope\": RecentStats: unable to find data in memory cache]"
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.733014 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.897880 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh68c\" (UniqueName: \"kubernetes.io/projected/2607e08f-8499-4718-95b4-1377b153c155-kube-api-access-lh68c\") pod \"2607e08f-8499-4718-95b4-1377b153c155\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") "
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.897991 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-combined-ca-bundle\") pod \"2607e08f-8499-4718-95b4-1377b153c155\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") "
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.898038 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2607e08f-8499-4718-95b4-1377b153c155-openstack-config\") pod \"2607e08f-8499-4718-95b4-1377b153c155\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") "
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.898109 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-openstack-config-secret\") pod \"2607e08f-8499-4718-95b4-1377b153c155\" (UID: \"2607e08f-8499-4718-95b4-1377b153c155\") "
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.904493 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2607e08f-8499-4718-95b4-1377b153c155-kube-api-access-lh68c" (OuterVolumeSpecName: "kube-api-access-lh68c") pod "2607e08f-8499-4718-95b4-1377b153c155" (UID: "2607e08f-8499-4718-95b4-1377b153c155"). InnerVolumeSpecName "kube-api-access-lh68c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.935540 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2607e08f-8499-4718-95b4-1377b153c155" (UID: "2607e08f-8499-4718-95b4-1377b153c155"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.935623 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2607e08f-8499-4718-95b4-1377b153c155-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2607e08f-8499-4718-95b4-1377b153c155" (UID: "2607e08f-8499-4718-95b4-1377b153c155"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 12:20:26 crc kubenswrapper[4728]: I0204 12:20:26.957892 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2607e08f-8499-4718-95b4-1377b153c155" (UID: "2607e08f-8499-4718-95b4-1377b153c155"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.000595 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh68c\" (UniqueName: \"kubernetes.io/projected/2607e08f-8499-4718-95b4-1377b153c155-kube-api-access-lh68c\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.000634 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.000646 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2607e08f-8499-4718-95b4-1377b153c155-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.000656 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2607e08f-8499-4718-95b4-1377b153c155-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.221771 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.227361 4728 generic.go:334] "Generic (PLEG): container finished" podID="2607e08f-8499-4718-95b4-1377b153c155" containerID="4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3" exitCode=137
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.227479 4728 scope.go:117] "RemoveContainer" containerID="4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.227672 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.256465 4728 generic.go:334] "Generic (PLEG): container finished" podID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerID="32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16" exitCode=0
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.256512 4728 generic.go:334] "Generic (PLEG): container finished" podID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerID="85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e" exitCode=0
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.256521 4728 generic.go:334] "Generic (PLEG): container finished" podID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerID="2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f" exitCode=0
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.258127 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.258386 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706","Type":"ContainerDied","Data":"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16"}
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.258425 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706","Type":"ContainerDied","Data":"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e"}
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.258437 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706","Type":"ContainerDied","Data":"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f"}
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.258447 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706","Type":"ContainerDied","Data":"2352bbd0bb84b4babc805f0e785c53ed654830bf5b7d852a52e6c0d34cdc8c7c"}
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.281636 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2607e08f-8499-4718-95b4-1377b153c155" podUID="f6dfc933-2564-456a-ad35-ae3bf8afdbd3"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.306157 4728 scope.go:117] "RemoveContainer" containerID="4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3"
Feb 04 12:20:27 crc kubenswrapper[4728]: E0204 12:20:27.314003 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3\": container with ID starting with 4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3 not found: ID does not exist" containerID="4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.314066 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3"} err="failed to get container status \"4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3\": rpc error: code = NotFound desc = could not find container \"4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3\": container with ID starting with 4d4899ddf0d1fd0a396a83c413008c1e5efa20cd26c9f19a00d5982b2f9ea8b3 not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.325052 4728 scope.go:117] "RemoveContainer" containerID="32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.403051 4728 scope.go:117] "RemoveContainer" containerID="85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.410864 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-web-config\") pod \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") "
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.411482 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config-out\") pod \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") "
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.411547 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlwhk\" (UniqueName: \"kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-kube-api-access-nlwhk\") pod \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") "
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.411696 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-tls-assets\") pod \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") "
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.411730 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config\") pod \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") "
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.411797 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-1\") pod \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") "
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.411873 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-0\") pod \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") "
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.411924 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-thanos-prometheus-http-client-file\") pod \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") "
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.411970 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") "
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.412018 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-2\") pod \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\" (UID: \"8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706\") "
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.413864 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" (UID: "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.416854 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" (UID: "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.417317 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" (UID: "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.417364 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" (UID: "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.439039 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" (UID: "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.440923 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config" (OuterVolumeSpecName: "config") pod "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" (UID: "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.442023 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config-out" (OuterVolumeSpecName: "config-out") pod "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" (UID: "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.442482 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" (UID: "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.443915 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-kube-api-access-nlwhk" (OuterVolumeSpecName: "kube-api-access-nlwhk") pod "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" (UID: "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706"). InnerVolumeSpecName "kube-api-access-nlwhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.455047 4728 scope.go:117] "RemoveContainer" containerID="2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.482617 4728 scope.go:117] "RemoveContainer" containerID="f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.489965 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-web-config" (OuterVolumeSpecName: "web-config") pod "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" (UID: "8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.517171 4728 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config-out\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.517209 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlwhk\" (UniqueName: \"kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-kube-api-access-nlwhk\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.517222 4728 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.517235 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-config\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.517245 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.517256 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.517268 4728 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.517309 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.517323 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.517337 4728 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706-web-config\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.518589 4728 scope.go:117] "RemoveContainer" containerID="32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16"
Feb 04 12:20:27 crc kubenswrapper[4728]: E0204 12:20:27.519459 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16\": container with ID starting with 32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16 not found: ID does not exist" containerID="32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.519539 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16"} err="failed to get container status \"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16\": rpc error: code = NotFound desc = could not find container \"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16\": container with ID starting with 32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16 not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.519598 4728 scope.go:117] "RemoveContainer" containerID="85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e"
Feb 04 12:20:27 crc kubenswrapper[4728]: E0204 12:20:27.520003 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e\": container with ID starting with 85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e not found: ID does not exist" containerID="85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.520050 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e"} err="failed to get container status \"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e\": rpc error: code = NotFound desc = could not find container \"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e\": container with ID starting with 85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.520083 4728 scope.go:117] "RemoveContainer" containerID="2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f"
Feb 04 12:20:27 crc kubenswrapper[4728]: E0204 12:20:27.520885 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f\": container with ID starting with 2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f not found: ID does not exist" containerID="2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.520917 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f"} err="failed to get container status \"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f\": rpc error: code = NotFound desc = could not find container \"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f\": container with ID starting with 2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.520936 4728 scope.go:117] "RemoveContainer" containerID="f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e"
Feb 04 12:20:27 crc kubenswrapper[4728]: E0204 12:20:27.521463 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e\": container with ID starting with f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e not found: ID does not exist" containerID="f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.521492 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e"} err="failed to get container status \"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e\": rpc error: code = NotFound desc = could not find container \"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e\": container with ID starting with f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.521511 4728 scope.go:117] "RemoveContainer" containerID="32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.522116 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16"} err="failed to get container status \"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16\": rpc error: code = NotFound desc = could not find container \"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16\": container with ID starting with 32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16 not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.522163 4728 scope.go:117] "RemoveContainer" containerID="85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.522584 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e"} err="failed to get container status \"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e\": rpc error: code = NotFound desc = could not find container \"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e\": container with ID starting with 85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.522632 4728 scope.go:117] "RemoveContainer" containerID="2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.523333 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f"} err="failed to get container status \"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f\": rpc error: code = NotFound desc = could not find container \"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f\": container with ID starting with 2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.523362 4728 scope.go:117] "RemoveContainer" containerID="f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.525052 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e"} err="failed to get container status \"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e\": rpc error: code = NotFound desc = could not find container \"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e\": container with ID starting with f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.525112 4728 scope.go:117] "RemoveContainer" containerID="32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.525680 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16"} err="failed to get container status \"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16\": rpc error: code = NotFound desc = could not find container \"32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16\": container with ID starting with 32d13f3dfe5d37e395a12603b792d56e1e7fd1af824431632f361020aad08f16 not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.525714 4728 scope.go:117] "RemoveContainer" containerID="85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.525972 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e"} err="failed to get container status \"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e\": rpc error: code = NotFound desc = could not find container \"85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e\": container with ID starting with 85c84aae5ae104651bf2e47eda45dda191083509b8aa3141e9f4626382cd084e not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.525999 4728 scope.go:117] "RemoveContainer" containerID="2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.526383 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f"} err="failed to get container status \"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f\": rpc error: code = NotFound desc = could not find container \"2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f\": container with ID starting with 2c1de00124a387558c902babc4167596d801ea429db5954bb84016b4aa9f893f not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.526409 4728 scope.go:117] "RemoveContainer" containerID="f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.526699 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e"} err="failed to get container status \"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e\": rpc error: code = NotFound desc = could not find container \"f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e\": container with ID starting with f600a906790ef444741cc2c02c9a4ad0263de70a8f3e4657add91766c7daa66e not found: ID does not exist"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.537020 4728 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.566293 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2607e08f-8499-4718-95b4-1377b153c155" path="/var/lib/kubelet/pods/2607e08f-8499-4718-95b4-1377b153c155/volumes"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.608880 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.619525 4728 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.630958 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.644595 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 04 12:20:27 crc kubenswrapper[4728]: E0204 12:20:27.645258 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="config-reloader"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.645282 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="config-reloader"
Feb 04 12:20:27 crc kubenswrapper[4728]: E0204 12:20:27.645301 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="prometheus"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.645310 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="prometheus"
Feb 04 12:20:27 crc kubenswrapper[4728]: E0204 12:20:27.645325 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="thanos-sidecar"
Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.645331 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="thanos-sidecar"
Feb 04 12:20:27 crc kubenswrapper[4728]: E0204 12:20:27.645347 4728
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="init-config-reloader" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.645355 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="init-config-reloader" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.648519 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="thanos-sidecar" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.648568 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="prometheus" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.648591 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" containerName="config-reloader" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.652636 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.656300 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.656599 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.656772 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-dpkpf" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.658663 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.658964 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.659109 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.659180 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.662989 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.665400 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.672616 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824098 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824183 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824206 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824239 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824284 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjcqg\" (UniqueName: \"kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-kube-api-access-jjcqg\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824306 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824327 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824396 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824495 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6991a969-9b71-413a-b9b9-d5ff15521b0d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824542 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824835 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-config\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.824888 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.825052 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927119 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927166 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927206 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927246 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjcqg\" (UniqueName: \"kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-kube-api-access-jjcqg\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927265 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " 
pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927288 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927322 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927322 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927345 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6991a969-9b71-413a-b9b9-d5ff15521b0d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927364 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927416 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-config\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927431 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927481 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.927508 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.930159 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.931675 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.932661 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.932677 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.934792 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.935580 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6991a969-9b71-413a-b9b9-d5ff15521b0d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.937241 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-config\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.939239 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 
12:20:27.940055 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.940958 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.946445 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.954005 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjcqg\" (UniqueName: \"kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-kube-api-access-jjcqg\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.965677 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:27 crc kubenswrapper[4728]: I0204 12:20:27.981880 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:28 crc kubenswrapper[4728]: I0204 12:20:28.438918 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:20:28 crc kubenswrapper[4728]: W0204 12:20:28.446605 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6991a969_9b71_413a_b9b9_d5ff15521b0d.slice/crio-e6a7c7370d92136c9f379640c5235894241b63da29b36bb3017de7aceac3c959 WatchSource:0}: Error finding container e6a7c7370d92136c9f379640c5235894241b63da29b36bb3017de7aceac3c959: Status 404 returned error can't find the container with id e6a7c7370d92136c9f379640c5235894241b63da29b36bb3017de7aceac3c959 Feb 04 12:20:29 crc kubenswrapper[4728]: I0204 12:20:29.281824 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6991a969-9b71-413a-b9b9-d5ff15521b0d","Type":"ContainerStarted","Data":"e6a7c7370d92136c9f379640c5235894241b63da29b36bb3017de7aceac3c959"} Feb 04 12:20:29 crc kubenswrapper[4728]: I0204 12:20:29.565236 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706" path="/var/lib/kubelet/pods/8eeeb853-d0f4-43d5-b5b0-f71eb0fb3706/volumes" Feb 04 12:20:32 crc kubenswrapper[4728]: I0204 12:20:32.308436 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6991a969-9b71-413a-b9b9-d5ff15521b0d","Type":"ContainerStarted","Data":"16c0c36c7a22fc95cacf02b5c5448633541cedcd60157fa84f1a57e7b29fce3d"} Feb 04 12:20:32 crc kubenswrapper[4728]: I0204 12:20:32.554408 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409" Feb 04 12:20:32 crc kubenswrapper[4728]: E0204 12:20:32.554877 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:20:39 crc kubenswrapper[4728]: I0204 12:20:39.377420 4728 generic.go:334] "Generic (PLEG): container finished" podID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerID="16c0c36c7a22fc95cacf02b5c5448633541cedcd60157fa84f1a57e7b29fce3d" exitCode=0 Feb 04 12:20:39 crc kubenswrapper[4728]: I0204 12:20:39.377529 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6991a969-9b71-413a-b9b9-d5ff15521b0d","Type":"ContainerDied","Data":"16c0c36c7a22fc95cacf02b5c5448633541cedcd60157fa84f1a57e7b29fce3d"} Feb 04 12:20:40 crc kubenswrapper[4728]: I0204 12:20:40.390518 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6991a969-9b71-413a-b9b9-d5ff15521b0d","Type":"ContainerStarted","Data":"dd528c4c98aad143de1cff5d7e622e6db0c706f3762f117d022aa2e9ed19f0a8"} Feb 04 12:20:43 crc kubenswrapper[4728]: I0204 12:20:43.418194 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6991a969-9b71-413a-b9b9-d5ff15521b0d","Type":"ContainerStarted","Data":"2ec132ccab11924905222f366ec3469a4cc92db544ef5726d9d37ddc1c4d09e8"} Feb 04 12:20:43 crc 
kubenswrapper[4728]: I0204 12:20:43.418889 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6991a969-9b71-413a-b9b9-d5ff15521b0d","Type":"ContainerStarted","Data":"8807779abb6a17e2da45db858539c3667442b97747b4ba316ab1bd1ddf952d2f"} Feb 04 12:20:43 crc kubenswrapper[4728]: I0204 12:20:43.489875 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.489857241 podStartE2EDuration="16.489857241s" podCreationTimestamp="2026-02-04 12:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 12:20:43.478838323 +0000 UTC m=+3192.621542708" watchObservedRunningTime="2026-02-04 12:20:43.489857241 +0000 UTC m=+3192.632561626" Feb 04 12:20:44 crc kubenswrapper[4728]: I0204 12:20:44.554357 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409" Feb 04 12:20:45 crc kubenswrapper[4728]: I0204 12:20:45.439074 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"2c6acb80889046d92069b3298c3eed8cc5e6b53fb2b7561199c1feb4cd3c2597"} Feb 04 12:20:47 crc kubenswrapper[4728]: I0204 12:20:47.982965 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:57 crc kubenswrapper[4728]: I0204 12:20:57.983407 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:57 crc kubenswrapper[4728]: I0204 12:20:57.989505 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 04 12:20:58 crc kubenswrapper[4728]: I0204 12:20:58.577880 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:02 crc kubenswrapper[4728]: I0204 12:23:02.119244 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6dcb54f59-lnlx2_3b49d7d8-7c63-482c-b882-25c01e798afe/manager/0.log" Feb 04 12:23:04 crc kubenswrapper[4728]: I0204 12:23:04.213526 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:23:04 crc kubenswrapper[4728]: I0204 12:23:04.215346 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="prometheus" containerID="cri-o://dd528c4c98aad143de1cff5d7e622e6db0c706f3762f117d022aa2e9ed19f0a8" gracePeriod=600 Feb 04 12:23:04 crc kubenswrapper[4728]: I0204 12:23:04.215443 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="thanos-sidecar" containerID="cri-o://2ec132ccab11924905222f366ec3469a4cc92db544ef5726d9d37ddc1c4d09e8" gracePeriod=600 Feb 04 12:23:04 crc kubenswrapper[4728]: I0204 12:23:04.215410 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="config-reloader" 
containerID="cri-o://8807779abb6a17e2da45db858539c3667442b97747b4ba316ab1bd1ddf952d2f" gracePeriod=600 Feb 04 12:23:04 crc kubenswrapper[4728]: I0204 12:23:04.796330 4728 generic.go:334] "Generic (PLEG): container finished" podID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerID="2ec132ccab11924905222f366ec3469a4cc92db544ef5726d9d37ddc1c4d09e8" exitCode=0 Feb 04 12:23:04 crc kubenswrapper[4728]: I0204 12:23:04.796381 4728 generic.go:334] "Generic (PLEG): container finished" podID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerID="8807779abb6a17e2da45db858539c3667442b97747b4ba316ab1bd1ddf952d2f" exitCode=0 Feb 04 12:23:04 crc kubenswrapper[4728]: I0204 12:23:04.796397 4728 generic.go:334] "Generic (PLEG): container finished" podID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerID="dd528c4c98aad143de1cff5d7e622e6db0c706f3762f117d022aa2e9ed19f0a8" exitCode=0 Feb 04 12:23:04 crc kubenswrapper[4728]: I0204 12:23:04.796390 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6991a969-9b71-413a-b9b9-d5ff15521b0d","Type":"ContainerDied","Data":"2ec132ccab11924905222f366ec3469a4cc92db544ef5726d9d37ddc1c4d09e8"} Feb 04 12:23:04 crc kubenswrapper[4728]: I0204 12:23:04.796440 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6991a969-9b71-413a-b9b9-d5ff15521b0d","Type":"ContainerDied","Data":"8807779abb6a17e2da45db858539c3667442b97747b4ba316ab1bd1ddf952d2f"} Feb 04 12:23:04 crc kubenswrapper[4728]: I0204 12:23:04.796454 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6991a969-9b71-413a-b9b9-d5ff15521b0d","Type":"ContainerDied","Data":"dd528c4c98aad143de1cff5d7e622e6db0c706f3762f117d022aa2e9ed19f0a8"} Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.317690 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.448398 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.448461 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491042 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-tls-assets\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491112 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-thanos-prometheus-http-client-file\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491209 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjcqg\" (UniqueName: \"kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-kube-api-access-jjcqg\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491260 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-1\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491327 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491374 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-0\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491418 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6991a969-9b71-413a-b9b9-d5ff15521b0d-config-out\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc 
kubenswrapper[4728]: I0204 12:23:05.491486 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491518 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-2\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491608 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-secret-combined-ca-bundle\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491701 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491815 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.491893 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-config\") pod \"6991a969-9b71-413a-b9b9-d5ff15521b0d\" (UID: \"6991a969-9b71-413a-b9b9-d5ff15521b0d\") " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.497962 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.499131 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "secret-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.499947 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.499968 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.500021 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.500051 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6991a969-9b71-413a-b9b9-d5ff15521b0d-config-out" (OuterVolumeSpecName: "config-out") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.501911 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-config" (OuterVolumeSpecName: "config") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.502698 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.505455 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.509172 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.509220 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-kube-api-access-jjcqg" (OuterVolumeSpecName: "kube-api-access-jjcqg") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "kube-api-access-jjcqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.509307 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.593965 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config" (OuterVolumeSpecName: "web-config") pod "6991a969-9b71-413a-b9b9-d5ff15521b0d" (UID: "6991a969-9b71-413a-b9b9-d5ff15521b0d"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.595729 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.595794 4728 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6991a969-9b71-413a-b9b9-d5ff15521b0d-config-out\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.595825 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.595841 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.595857 4728 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.595870 4728 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.595946 4728 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.595962 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-config\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.595974 4728 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.596013 4728 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.596028 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjcqg\" (UniqueName: \"kubernetes.io/projected/6991a969-9b71-413a-b9b9-d5ff15521b0d-kube-api-access-jjcqg\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.596041 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6991a969-9b71-413a-b9b9-d5ff15521b0d-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: 
I0204 12:23:05.596053 4728 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6991a969-9b71-413a-b9b9-d5ff15521b0d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.623724 4728 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.699067 4728 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.813564 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6991a969-9b71-413a-b9b9-d5ff15521b0d","Type":"ContainerDied","Data":"e6a7c7370d92136c9f379640c5235894241b63da29b36bb3017de7aceac3c959"} Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.813633 4728 scope.go:117] "RemoveContainer" containerID="2ec132ccab11924905222f366ec3469a4cc92db544ef5726d9d37ddc1c4d09e8" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.813776 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.844135 4728 scope.go:117] "RemoveContainer" containerID="8807779abb6a17e2da45db858539c3667442b97747b4ba316ab1bd1ddf952d2f" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.869000 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.879202 4728 scope.go:117] "RemoveContainer" containerID="dd528c4c98aad143de1cff5d7e622e6db0c706f3762f117d022aa2e9ed19f0a8" Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.890285 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:23:05 crc kubenswrapper[4728]: I0204 12:23:05.908421 4728 scope.go:117] "RemoveContainer" containerID="16c0c36c7a22fc95cacf02b5c5448633541cedcd60157fa84f1a57e7b29fce3d" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.693637 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:23:06 crc kubenswrapper[4728]: E0204 12:23:06.694124 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="init-config-reloader" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.694143 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="init-config-reloader" Feb 04 12:23:06 crc kubenswrapper[4728]: E0204 12:23:06.694166 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="prometheus" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.694174 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="prometheus" Feb 04 12:23:06 crc kubenswrapper[4728]: E0204 12:23:06.694189 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="thanos-sidecar" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 
12:23:06.694199 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="thanos-sidecar" Feb 04 12:23:06 crc kubenswrapper[4728]: E0204 12:23:06.694229 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="config-reloader" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.694238 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="config-reloader" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.694454 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="prometheus" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.694482 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="config-reloader" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.694501 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" containerName="thanos-sidecar" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.696617 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.699434 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.699477 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.699646 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.700728 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.701169 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.701331 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.701536 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.702703 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-dpkpf" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.705932 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.729861 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.819918 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 
12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.819958 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.820015 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.820042 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rx79\" (UniqueName: \"kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-kube-api-access-6rx79\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.820058 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.820076 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.820142 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.820181 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.820203 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.820221 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.820259 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.820283 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.820306 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922198 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922538 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922571 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922601 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922651 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922675 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922702 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922779 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922804 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922873 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922902 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rx79\" (UniqueName: \"kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-kube-api-access-6rx79\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922923 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.922948 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.923340 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.923380 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.924155 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.924394 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.927987 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.928020 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.928536 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.933647 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.933992 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.934158 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.934213 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.934365 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:06 crc kubenswrapper[4728]: I0204 12:23:06.941866 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rx79\" (UniqueName: \"kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-kube-api-access-6rx79\") pod \"prometheus-metric-storage-0\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:07 crc kubenswrapper[4728]: I0204 12:23:07.019454 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:07 crc kubenswrapper[4728]: I0204 12:23:07.539677 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:23:07 crc kubenswrapper[4728]: I0204 12:23:07.567939 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6991a969-9b71-413a-b9b9-d5ff15521b0d" path="/var/lib/kubelet/pods/6991a969-9b71-413a-b9b9-d5ff15521b0d/volumes" Feb 04 12:23:07 crc kubenswrapper[4728]: I0204 12:23:07.836712 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3","Type":"ContainerStarted","Data":"29f22e104230e60fbf5ac1548bcbf6b62e786ee2ca96c04dd9a023c0b623df8c"} Feb 04 12:23:10 crc kubenswrapper[4728]: I0204 12:23:10.811564 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-496db"] Feb 04 12:23:10 crc kubenswrapper[4728]: I0204 12:23:10.814623 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:10 crc kubenswrapper[4728]: I0204 12:23:10.830410 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-496db"] Feb 04 12:23:10 crc kubenswrapper[4728]: I0204 12:23:10.903313 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55b7h\" (UniqueName: \"kubernetes.io/projected/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-kube-api-access-55b7h\") pod \"redhat-operators-496db\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:10 crc kubenswrapper[4728]: I0204 12:23:10.903806 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-utilities\") pod \"redhat-operators-496db\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:10 crc kubenswrapper[4728]: I0204 12:23:10.903854 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-catalog-content\") pod \"redhat-operators-496db\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.005928 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-utilities\") pod \"redhat-operators-496db\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.006052 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-catalog-content\") pod \"redhat-operators-496db\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.006111 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55b7h\" (UniqueName: \"kubernetes.io/projected/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-kube-api-access-55b7h\") pod \"redhat-operators-496db\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.006447 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-utilities\") pod \"redhat-operators-496db\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.006461 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-catalog-content\") pod \"redhat-operators-496db\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.031017 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-55b7h\" (UniqueName: \"kubernetes.io/projected/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-kube-api-access-55b7h\") pod \"redhat-operators-496db\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.144099 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:11 crc kubenswrapper[4728]: W0204 12:23:11.599311 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1dcf3bd_86e8_4d51_a5db_66ecc7149500.slice/crio-ffdf20a76b76b240ae8ffbccfbca6e1fe519f1638ef0dc9648f27ab1692e7675 WatchSource:0}: Error finding container ffdf20a76b76b240ae8ffbccfbca6e1fe519f1638ef0dc9648f27ab1692e7675: Status 404 returned error can't find the container with id ffdf20a76b76b240ae8ffbccfbca6e1fe519f1638ef0dc9648f27ab1692e7675 Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.608544 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-496db"] Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.882134 4728 generic.go:334] "Generic (PLEG): container finished" podID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerID="f5c13e1522e06a5af04b7d89b8de269e822c7b7855b182fcb0fba672a3ad0d2e" exitCode=0 Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.882243 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-496db" event={"ID":"d1dcf3bd-86e8-4d51-a5db-66ecc7149500","Type":"ContainerDied","Data":"f5c13e1522e06a5af04b7d89b8de269e822c7b7855b182fcb0fba672a3ad0d2e"} Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.882302 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-496db" event={"ID":"d1dcf3bd-86e8-4d51-a5db-66ecc7149500","Type":"ContainerStarted","Data":"ffdf20a76b76b240ae8ffbccfbca6e1fe519f1638ef0dc9648f27ab1692e7675"} Feb 04 12:23:11 crc kubenswrapper[4728]: I0204 12:23:11.884519 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3","Type":"ContainerStarted","Data":"4c8212153d2a58c1172488fc4123805a7e3081ddf1997ba60a2ff504df17f97e"} Feb 04 12:23:13 crc kubenswrapper[4728]: I0204 12:23:13.911746 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-496db" event={"ID":"d1dcf3bd-86e8-4d51-a5db-66ecc7149500","Type":"ContainerStarted","Data":"468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a"} Feb 04 12:23:14 crc kubenswrapper[4728]: I0204 12:23:14.926532 4728 generic.go:334] "Generic (PLEG): container finished" podID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerID="468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a" exitCode=0 Feb 04 12:23:14 crc kubenswrapper[4728]: I0204 12:23:14.926619 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-496db" event={"ID":"d1dcf3bd-86e8-4d51-a5db-66ecc7149500","Type":"ContainerDied","Data":"468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a"} Feb 04 12:23:17 crc kubenswrapper[4728]: I0204 12:23:17.997179 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-496db" 
event={"ID":"d1dcf3bd-86e8-4d51-a5db-66ecc7149500","Type":"ContainerStarted","Data":"6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8"} Feb 04 12:23:18 crc kubenswrapper[4728]: I0204 12:23:18.000722 4728 generic.go:334] "Generic (PLEG): container finished" podID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerID="4c8212153d2a58c1172488fc4123805a7e3081ddf1997ba60a2ff504df17f97e" exitCode=0 Feb 04 12:23:18 crc kubenswrapper[4728]: I0204 12:23:18.000781 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3","Type":"ContainerDied","Data":"4c8212153d2a58c1172488fc4123805a7e3081ddf1997ba60a2ff504df17f97e"} Feb 04 12:23:18 crc kubenswrapper[4728]: I0204 12:23:18.029366 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-496db" podStartSLOduration=2.473843125 podStartE2EDuration="8.029346368s" podCreationTimestamp="2026-02-04 12:23:10 +0000 UTC" firstStartedPulling="2026-02-04 12:23:11.883784669 +0000 UTC m=+3341.026489054" lastFinishedPulling="2026-02-04 12:23:17.439287912 +0000 UTC m=+3346.581992297" observedRunningTime="2026-02-04 12:23:18.026875019 +0000 UTC m=+3347.169579404" watchObservedRunningTime="2026-02-04 12:23:18.029346368 +0000 UTC m=+3347.172050763" Feb 04 12:23:19 crc kubenswrapper[4728]: I0204 12:23:19.012282 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3","Type":"ContainerStarted","Data":"57bb2acba274931978eb2609877b0a41953c42c5b17804041cab34eaaa1f4356"} Feb 04 12:23:21 crc kubenswrapper[4728]: I0204 12:23:21.144434 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:21 crc kubenswrapper[4728]: I0204 12:23:21.145055 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:22 crc kubenswrapper[4728]: I0204 12:23:22.045208 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3","Type":"ContainerStarted","Data":"b10b1ea8ec2398f79d1b3f41651b1b62231ff73a787bb0e6fede2edf6736f632"} Feb 04 12:23:22 crc kubenswrapper[4728]: I0204 12:23:22.045594 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3","Type":"ContainerStarted","Data":"12d6578e3e1eb4c34cfffefc544de6afb9af804cfbe32f9da79824b5225546a5"} Feb 04 12:23:22 crc kubenswrapper[4728]: I0204 12:23:22.094108 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.094082615 podStartE2EDuration="16.094082615s" podCreationTimestamp="2026-02-04 12:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 12:23:22.076877977 +0000 UTC m=+3351.219582382" watchObservedRunningTime="2026-02-04 12:23:22.094082615 +0000 UTC m=+3351.236787000" Feb 04 12:23:22 crc kubenswrapper[4728]: I0204 12:23:22.192189 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-496db" podUID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerName="registry-server" probeResult="failure" output=< Feb 04 12:23:22 crc 
kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 04 12:23:22 crc kubenswrapper[4728]: > Feb 04 12:23:27 crc kubenswrapper[4728]: I0204 12:23:27.020438 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:31 crc kubenswrapper[4728]: I0204 12:23:31.189215 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:31 crc kubenswrapper[4728]: I0204 12:23:31.247963 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:31 crc kubenswrapper[4728]: I0204 12:23:31.430198 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-496db"] Feb 04 12:23:33 crc kubenswrapper[4728]: I0204 12:23:33.174788 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-496db" podUID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerName="registry-server" containerID="cri-o://6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8" gracePeriod=2 Feb 04 12:23:33 crc kubenswrapper[4728]: I0204 12:23:33.660002 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:33 crc kubenswrapper[4728]: I0204 12:23:33.802311 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55b7h\" (UniqueName: \"kubernetes.io/projected/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-kube-api-access-55b7h\") pod \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " Feb 04 12:23:33 crc kubenswrapper[4728]: I0204 12:23:33.802401 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-catalog-content\") pod \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " Feb 04 12:23:33 crc kubenswrapper[4728]: I0204 12:23:33.802683 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-utilities\") pod \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\" (UID: \"d1dcf3bd-86e8-4d51-a5db-66ecc7149500\") " Feb 04 12:23:33 crc kubenswrapper[4728]: I0204 12:23:33.803896 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-utilities" (OuterVolumeSpecName: "utilities") pod "d1dcf3bd-86e8-4d51-a5db-66ecc7149500" (UID: "d1dcf3bd-86e8-4d51-a5db-66ecc7149500"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:23:33 crc kubenswrapper[4728]: I0204 12:23:33.810318 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-kube-api-access-55b7h" (OuterVolumeSpecName: "kube-api-access-55b7h") pod "d1dcf3bd-86e8-4d51-a5db-66ecc7149500" (UID: "d1dcf3bd-86e8-4d51-a5db-66ecc7149500"). InnerVolumeSpecName "kube-api-access-55b7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:23:33 crc kubenswrapper[4728]: I0204 12:23:33.905391 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:33 crc kubenswrapper[4728]: I0204 12:23:33.905614 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55b7h\" (UniqueName: \"kubernetes.io/projected/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-kube-api-access-55b7h\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:33 crc kubenswrapper[4728]: I0204 12:23:33.910699 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1dcf3bd-86e8-4d51-a5db-66ecc7149500" (UID: "d1dcf3bd-86e8-4d51-a5db-66ecc7149500"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.008538 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1dcf3bd-86e8-4d51-a5db-66ecc7149500-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.184994 4728 generic.go:334] "Generic (PLEG): container finished" podID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerID="6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8" exitCode=0 Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.185045 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-496db" event={"ID":"d1dcf3bd-86e8-4d51-a5db-66ecc7149500","Type":"ContainerDied","Data":"6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8"} Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.185058 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-496db" Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.185093 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-496db" event={"ID":"d1dcf3bd-86e8-4d51-a5db-66ecc7149500","Type":"ContainerDied","Data":"ffdf20a76b76b240ae8ffbccfbca6e1fe519f1638ef0dc9648f27ab1692e7675"} Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.185114 4728 scope.go:117] "RemoveContainer" containerID="6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8" Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.205615 4728 scope.go:117] "RemoveContainer" containerID="468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a" Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.243720 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-496db"] Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.253956 4728 scope.go:117] "RemoveContainer" containerID="f5c13e1522e06a5af04b7d89b8de269e822c7b7855b182fcb0fba672a3ad0d2e" Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.259426 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-496db"] Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.284175 4728 scope.go:117] "RemoveContainer" containerID="6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8" Feb 04 12:23:34 crc kubenswrapper[4728]: E0204 12:23:34.284683 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8\": container with ID starting with 6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8 not found: ID does not exist" containerID="6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8" Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.284720 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8"} err="failed to get container status \"6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8\": rpc error: code = NotFound desc = could not find container \"6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8\": container with ID starting with 6f2bc0b0789d744b85544fa6afcd4283de8564672aa37ffa155118a4e61b60b8 not found: ID does not exist" Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.284743 4728 scope.go:117] "RemoveContainer" containerID="468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a" Feb 04 12:23:34 crc kubenswrapper[4728]: E0204 12:23:34.285148 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a\": container with ID starting with 468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a not found: ID does not exist" containerID="468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a" Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.285189 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a"} err="failed to get container status \"468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a\": rpc error: code = NotFound desc = could not find container 
\"468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a\": container with ID starting with 468bb17e77b5c44c51d9b2ff32c765169b3399216ce6438a7f3ffd0bf4d4645a not found: ID does not exist" Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.285223 4728 scope.go:117] "RemoveContainer" containerID="f5c13e1522e06a5af04b7d89b8de269e822c7b7855b182fcb0fba672a3ad0d2e" Feb 04 12:23:34 crc kubenswrapper[4728]: E0204 12:23:34.285525 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c13e1522e06a5af04b7d89b8de269e822c7b7855b182fcb0fba672a3ad0d2e\": container with ID starting with f5c13e1522e06a5af04b7d89b8de269e822c7b7855b182fcb0fba672a3ad0d2e not found: ID does not exist" containerID="f5c13e1522e06a5af04b7d89b8de269e822c7b7855b182fcb0fba672a3ad0d2e" Feb 04 12:23:34 crc kubenswrapper[4728]: I0204 12:23:34.285549 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c13e1522e06a5af04b7d89b8de269e822c7b7855b182fcb0fba672a3ad0d2e"} err="failed to get container status \"f5c13e1522e06a5af04b7d89b8de269e822c7b7855b182fcb0fba672a3ad0d2e\": rpc error: code = NotFound desc = could not find container \"f5c13e1522e06a5af04b7d89b8de269e822c7b7855b182fcb0fba672a3ad0d2e\": container with ID starting with f5c13e1522e06a5af04b7d89b8de269e822c7b7855b182fcb0fba672a3ad0d2e not found: ID does not exist" Feb 04 12:23:35 crc kubenswrapper[4728]: I0204 12:23:35.449074 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:23:35 crc kubenswrapper[4728]: I0204 12:23:35.449444 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:23:35 crc kubenswrapper[4728]: I0204 12:23:35.566699 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" path="/var/lib/kubelet/pods/d1dcf3bd-86e8-4d51-a5db-66ecc7149500/volumes" Feb 04 12:23:37 crc kubenswrapper[4728]: I0204 12:23:37.020128 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:37 crc kubenswrapper[4728]: I0204 12:23:37.028430 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:37 crc kubenswrapper[4728]: I0204 12:23:37.219466 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.138200 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpn5"] Feb 04 12:23:43 crc kubenswrapper[4728]: E0204 12:23:43.139183 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerName="extract-content" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.139204 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerName="extract-content" Feb 04 
12:23:43 crc kubenswrapper[4728]: E0204 12:23:43.139234 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerName="registry-server" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.139241 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerName="registry-server" Feb 04 12:23:43 crc kubenswrapper[4728]: E0204 12:23:43.139261 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerName="extract-utilities" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.139267 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerName="extract-utilities" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.139440 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1dcf3bd-86e8-4d51-a5db-66ecc7149500" containerName="registry-server" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.141020 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.156739 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpn5"] Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.189999 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-catalog-content\") pod \"redhat-marketplace-2qpn5\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.190046 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-utilities\") pod \"redhat-marketplace-2qpn5\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.192997 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtkql\" (UniqueName: \"kubernetes.io/projected/d79df875-0631-452d-8753-b180a5e7ba67-kube-api-access-gtkql\") pod \"redhat-marketplace-2qpn5\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.294626 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtkql\" (UniqueName: \"kubernetes.io/projected/d79df875-0631-452d-8753-b180a5e7ba67-kube-api-access-gtkql\") pod \"redhat-marketplace-2qpn5\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.294885 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-catalog-content\") pod \"redhat-marketplace-2qpn5\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.294917 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-utilities\") pod \"redhat-marketplace-2qpn5\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.296908 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-catalog-content\") pod \"redhat-marketplace-2qpn5\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.297855 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-utilities\") pod \"redhat-marketplace-2qpn5\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.328941 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtkql\" (UniqueName: \"kubernetes.io/projected/d79df875-0631-452d-8753-b180a5e7ba67-kube-api-access-gtkql\") pod \"redhat-marketplace-2qpn5\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:43 crc kubenswrapper[4728]: I0204 12:23:43.497249 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:44 crc kubenswrapper[4728]: I0204 12:23:44.018865 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpn5"] Feb 04 12:23:44 crc kubenswrapper[4728]: I0204 12:23:44.273342 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpn5" event={"ID":"d79df875-0631-452d-8753-b180a5e7ba67","Type":"ContainerStarted","Data":"73e3591b8b68d33a4f1f1aa0699bd7e52b09029fe25e61f555dd161f7dcc9802"} Feb 04 12:23:45 crc kubenswrapper[4728]: I0204 12:23:45.283685 4728 generic.go:334] "Generic (PLEG): container finished" podID="d79df875-0631-452d-8753-b180a5e7ba67" containerID="f6f03d86717ef5d8824af81b7e6266a6bcbbdd7d45296f99de3eeefa3a6db341" exitCode=0 Feb 04 12:23:45 crc kubenswrapper[4728]: I0204 12:23:45.283726 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpn5" event={"ID":"d79df875-0631-452d-8753-b180a5e7ba67","Type":"ContainerDied","Data":"f6f03d86717ef5d8824af81b7e6266a6bcbbdd7d45296f99de3eeefa3a6db341"} Feb 04 12:23:46 crc kubenswrapper[4728]: I0204 12:23:46.297117 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpn5" event={"ID":"d79df875-0631-452d-8753-b180a5e7ba67","Type":"ContainerStarted","Data":"66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c"} Feb 04 12:23:47 crc kubenswrapper[4728]: I0204 12:23:47.307632 4728 generic.go:334] "Generic (PLEG): container finished" podID="d79df875-0631-452d-8753-b180a5e7ba67" containerID="66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c" exitCode=0 Feb 04 12:23:47 crc kubenswrapper[4728]: I0204 12:23:47.307674 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpn5" 
event={"ID":"d79df875-0631-452d-8753-b180a5e7ba67","Type":"ContainerDied","Data":"66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c"} Feb 04 12:23:48 crc kubenswrapper[4728]: I0204 12:23:48.326624 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpn5" event={"ID":"d79df875-0631-452d-8753-b180a5e7ba67","Type":"ContainerStarted","Data":"9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6"} Feb 04 12:23:48 crc kubenswrapper[4728]: I0204 12:23:48.347171 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2qpn5" podStartSLOduration=2.884387497 podStartE2EDuration="5.347153945s" podCreationTimestamp="2026-02-04 12:23:43 +0000 UTC" firstStartedPulling="2026-02-04 12:23:45.286143882 +0000 UTC m=+3374.428848267" lastFinishedPulling="2026-02-04 12:23:47.74891033 +0000 UTC m=+3376.891614715" observedRunningTime="2026-02-04 12:23:48.344839018 +0000 UTC m=+3377.487543403" watchObservedRunningTime="2026-02-04 12:23:48.347153945 +0000 UTC m=+3377.489858330" Feb 04 12:23:53 crc kubenswrapper[4728]: I0204 12:23:53.498187 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:53 crc kubenswrapper[4728]: I0204 12:23:53.498950 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:53 crc kubenswrapper[4728]: I0204 12:23:53.549724 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:54 crc kubenswrapper[4728]: I0204 12:23:54.429661 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:54 crc kubenswrapper[4728]: I0204 12:23:54.486080 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpn5"] Feb 04 12:23:56 crc kubenswrapper[4728]: I0204 12:23:56.398985 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2qpn5" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="registry-server" containerID="cri-o://9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6" gracePeriod=2 Feb 04 12:23:56 crc kubenswrapper[4728]: I0204 12:23:56.930396 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:56 crc kubenswrapper[4728]: I0204 12:23:56.971533 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtkql\" (UniqueName: \"kubernetes.io/projected/d79df875-0631-452d-8753-b180a5e7ba67-kube-api-access-gtkql\") pod \"d79df875-0631-452d-8753-b180a5e7ba67\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " Feb 04 12:23:56 crc kubenswrapper[4728]: I0204 12:23:56.971695 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-catalog-content\") pod \"d79df875-0631-452d-8753-b180a5e7ba67\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " Feb 04 12:23:56 crc kubenswrapper[4728]: I0204 12:23:56.971833 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-utilities\") pod \"d79df875-0631-452d-8753-b180a5e7ba67\" (UID: \"d79df875-0631-452d-8753-b180a5e7ba67\") " Feb 04 12:23:56 crc kubenswrapper[4728]: I0204 12:23:56.973144 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-utilities" (OuterVolumeSpecName: "utilities") pod "d79df875-0631-452d-8753-b180a5e7ba67" (UID: "d79df875-0631-452d-8753-b180a5e7ba67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:23:56 crc kubenswrapper[4728]: I0204 12:23:56.977878 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d79df875-0631-452d-8753-b180a5e7ba67-kube-api-access-gtkql" (OuterVolumeSpecName: "kube-api-access-gtkql") pod "d79df875-0631-452d-8753-b180a5e7ba67" (UID: "d79df875-0631-452d-8753-b180a5e7ba67"). InnerVolumeSpecName "kube-api-access-gtkql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.073996 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtkql\" (UniqueName: \"kubernetes.io/projected/d79df875-0631-452d-8753-b180a5e7ba67-kube-api-access-gtkql\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.074038 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.409472 4728 generic.go:334] "Generic (PLEG): container finished" podID="d79df875-0631-452d-8753-b180a5e7ba67" containerID="9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6" exitCode=0 Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.409522 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qpn5" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.409543 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpn5" event={"ID":"d79df875-0631-452d-8753-b180a5e7ba67","Type":"ContainerDied","Data":"9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6"} Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.409969 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpn5" event={"ID":"d79df875-0631-452d-8753-b180a5e7ba67","Type":"ContainerDied","Data":"73e3591b8b68d33a4f1f1aa0699bd7e52b09029fe25e61f555dd161f7dcc9802"} Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.410005 4728 scope.go:117] "RemoveContainer" containerID="9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.429408 4728 scope.go:117] "RemoveContainer" containerID="66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.461903 4728 scope.go:117] "RemoveContainer" containerID="f6f03d86717ef5d8824af81b7e6266a6bcbbdd7d45296f99de3eeefa3a6db341" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.512121 4728 scope.go:117] "RemoveContainer" containerID="9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6" Feb 04 12:23:57 crc kubenswrapper[4728]: E0204 12:23:57.512547 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6\": container with ID starting with 9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6 not found: ID does not exist" containerID="9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.512588 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6"} err="failed to get container status \"9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6\": rpc error: code = NotFound desc = could not find container \"9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6\": container with ID starting with 9b5b248f8032055eb57bd58401fc5547de97aa1fb20b5aaad93eac760ceb66c6 not found: ID does not exist" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.512615 4728 scope.go:117] "RemoveContainer" containerID="66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c" Feb 04 12:23:57 crc kubenswrapper[4728]: E0204 12:23:57.512872 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c\": container with ID starting with 66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c not found: ID does not exist" containerID="66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.512907 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c"} err="failed to get container status \"66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c\": rpc error: code = NotFound desc = could not find container 
\"66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c\": container with ID starting with 66ad985d1d1251ef310fceb0b30ba67f86afacd6c311a2e3334bef2206706d3c not found: ID does not exist" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.512923 4728 scope.go:117] "RemoveContainer" containerID="f6f03d86717ef5d8824af81b7e6266a6bcbbdd7d45296f99de3eeefa3a6db341" Feb 04 12:23:57 crc kubenswrapper[4728]: E0204 12:23:57.513196 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6f03d86717ef5d8824af81b7e6266a6bcbbdd7d45296f99de3eeefa3a6db341\": container with ID starting with f6f03d86717ef5d8824af81b7e6266a6bcbbdd7d45296f99de3eeefa3a6db341 not found: ID does not exist" containerID="f6f03d86717ef5d8824af81b7e6266a6bcbbdd7d45296f99de3eeefa3a6db341" Feb 04 12:23:57 crc kubenswrapper[4728]: I0204 12:23:57.513233 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f03d86717ef5d8824af81b7e6266a6bcbbdd7d45296f99de3eeefa3a6db341"} err="failed to get container status \"f6f03d86717ef5d8824af81b7e6266a6bcbbdd7d45296f99de3eeefa3a6db341\": rpc error: code = NotFound desc = could not find container \"f6f03d86717ef5d8824af81b7e6266a6bcbbdd7d45296f99de3eeefa3a6db341\": container with ID starting with f6f03d86717ef5d8824af81b7e6266a6bcbbdd7d45296f99de3eeefa3a6db341 not found: ID does not exist" Feb 04 12:23:58 crc kubenswrapper[4728]: I0204 12:23:58.104684 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d79df875-0631-452d-8753-b180a5e7ba67" (UID: "d79df875-0631-452d-8753-b180a5e7ba67"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:23:58 crc kubenswrapper[4728]: I0204 12:23:58.191623 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d79df875-0631-452d-8753-b180a5e7ba67-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:23:58 crc kubenswrapper[4728]: I0204 12:23:58.365077 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpn5"] Feb 04 12:23:58 crc kubenswrapper[4728]: I0204 12:23:58.380524 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpn5"] Feb 04 12:23:59 crc kubenswrapper[4728]: I0204 12:23:59.568552 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d79df875-0631-452d-8753-b180a5e7ba67" path="/var/lib/kubelet/pods/d79df875-0631-452d-8753-b180a5e7ba67/volumes" Feb 04 12:24:05 crc kubenswrapper[4728]: I0204 12:24:05.448953 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:24:05 crc kubenswrapper[4728]: I0204 12:24:05.449585 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:24:05 crc kubenswrapper[4728]: I0204 12:24:05.449679 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 12:24:05 crc kubenswrapper[4728]: I0204 12:24:05.450899 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c6acb80889046d92069b3298c3eed8cc5e6b53fb2b7561199c1feb4cd3c2597"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 12:24:05 crc kubenswrapper[4728]: I0204 12:24:05.451022 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://2c6acb80889046d92069b3298c3eed8cc5e6b53fb2b7561199c1feb4cd3c2597" gracePeriod=600 Feb 04 12:24:06 crc kubenswrapper[4728]: I0204 12:24:06.493034 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="2c6acb80889046d92069b3298c3eed8cc5e6b53fb2b7561199c1feb4cd3c2597" exitCode=0 Feb 04 12:24:06 crc kubenswrapper[4728]: I0204 12:24:06.493096 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"2c6acb80889046d92069b3298c3eed8cc5e6b53fb2b7561199c1feb4cd3c2597"} Feb 04 12:24:06 crc kubenswrapper[4728]: I0204 12:24:06.493639 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" 
event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"} Feb 04 12:24:06 crc kubenswrapper[4728]: I0204 12:24:06.493662 4728 scope.go:117] "RemoveContainer" containerID="09d18e222649cf0d665c39e3df506201f942d239d8f841b656fce34adee7a409" Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.663769 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9mg9l"] Feb 04 12:24:19 crc kubenswrapper[4728]: E0204 12:24:19.664774 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="extract-content" Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.664791 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="extract-content" Feb 04 12:24:19 crc kubenswrapper[4728]: E0204 12:24:19.664812 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="registry-server" Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.664820 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="registry-server" Feb 04 12:24:19 crc kubenswrapper[4728]: E0204 12:24:19.664848 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="extract-utilities" Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.664856 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="extract-utilities" Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.665123 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="registry-server" Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.666792 4728 util.go:30] "No sandbox for pod can be found. 
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.663769 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9mg9l"]
Feb 04 12:24:19 crc kubenswrapper[4728]: E0204 12:24:19.664774 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="extract-content"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.664791 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="extract-content"
Feb 04 12:24:19 crc kubenswrapper[4728]: E0204 12:24:19.664812 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="registry-server"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.664820 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="registry-server"
Feb 04 12:24:19 crc kubenswrapper[4728]: E0204 12:24:19.664848 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="extract-utilities"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.664856 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="extract-utilities"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.665123 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79df875-0631-452d-8753-b180a5e7ba67" containerName="registry-server"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.666792 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mg9l"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.677329 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mg9l"]
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.783683 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008f1ee1-03f4-4f03-909d-5946c5f949a6-utilities\") pod \"community-operators-9mg9l\" (UID: \"008f1ee1-03f4-4f03-909d-5946c5f949a6\") " pod="openshift-marketplace/community-operators-9mg9l"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.783742 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008f1ee1-03f4-4f03-909d-5946c5f949a6-catalog-content\") pod \"community-operators-9mg9l\" (UID: \"008f1ee1-03f4-4f03-909d-5946c5f949a6\") " pod="openshift-marketplace/community-operators-9mg9l"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.783934 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2t4j\" (UniqueName: \"kubernetes.io/projected/008f1ee1-03f4-4f03-909d-5946c5f949a6-kube-api-access-f2t4j\") pod \"community-operators-9mg9l\" (UID: \"008f1ee1-03f4-4f03-909d-5946c5f949a6\") " pod="openshift-marketplace/community-operators-9mg9l"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.885726 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008f1ee1-03f4-4f03-909d-5946c5f949a6-utilities\") pod \"community-operators-9mg9l\" (UID: \"008f1ee1-03f4-4f03-909d-5946c5f949a6\") " pod="openshift-marketplace/community-operators-9mg9l"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.885874 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008f1ee1-03f4-4f03-909d-5946c5f949a6-catalog-content\") pod \"community-operators-9mg9l\" (UID: \"008f1ee1-03f4-4f03-909d-5946c5f949a6\") " pod="openshift-marketplace/community-operators-9mg9l"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.886084 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2t4j\" (UniqueName: \"kubernetes.io/projected/008f1ee1-03f4-4f03-909d-5946c5f949a6-kube-api-access-f2t4j\") pod \"community-operators-9mg9l\" (UID: \"008f1ee1-03f4-4f03-909d-5946c5f949a6\") " pod="openshift-marketplace/community-operators-9mg9l"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.886299 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008f1ee1-03f4-4f03-909d-5946c5f949a6-utilities\") pod \"community-operators-9mg9l\" (UID: \"008f1ee1-03f4-4f03-909d-5946c5f949a6\") " pod="openshift-marketplace/community-operators-9mg9l"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.886470 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008f1ee1-03f4-4f03-909d-5946c5f949a6-catalog-content\") pod \"community-operators-9mg9l\" (UID: \"008f1ee1-03f4-4f03-909d-5946c5f949a6\") " pod="openshift-marketplace/community-operators-9mg9l"
Feb 04 12:24:19 crc kubenswrapper[4728]: I0204 12:24:19.910662 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2t4j\" (UniqueName: \"kubernetes.io/projected/008f1ee1-03f4-4f03-909d-5946c5f949a6-kube-api-access-f2t4j\") pod \"community-operators-9mg9l\" (UID: \"008f1ee1-03f4-4f03-909d-5946c5f949a6\") " pod="openshift-marketplace/community-operators-9mg9l"
Feb 04 12:24:20 crc kubenswrapper[4728]: I0204 12:24:20.001726 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mg9l"
Feb 04 12:24:20 crc kubenswrapper[4728]: I0204 12:24:20.456488 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mg9l"]
Feb 04 12:24:20 crc kubenswrapper[4728]: I0204 12:24:20.641448 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mg9l" event={"ID":"008f1ee1-03f4-4f03-909d-5946c5f949a6","Type":"ContainerStarted","Data":"ae10749f540377ccbc082d1ac51957d382bc656ac9a0e58959fd5c964a017cb8"}
Feb 04 12:24:21 crc kubenswrapper[4728]: I0204 12:24:21.650821 4728 generic.go:334] "Generic (PLEG): container finished" podID="008f1ee1-03f4-4f03-909d-5946c5f949a6" containerID="1af65a37076f5f40bace7e74d8e28f16bb46ada51472128bface6b6c0766ccfb" exitCode=0
Feb 04 12:24:21 crc kubenswrapper[4728]: I0204 12:24:21.650977 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mg9l" event={"ID":"008f1ee1-03f4-4f03-909d-5946c5f949a6","Type":"ContainerDied","Data":"1af65a37076f5f40bace7e74d8e28f16bb46ada51472128bface6b6c0766ccfb"}
Feb 04 12:24:21 crc kubenswrapper[4728]: I0204 12:24:21.655918 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 04 12:24:22 crc kubenswrapper[4728]: I0204 12:24:22.662169 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mg9l" event={"ID":"008f1ee1-03f4-4f03-909d-5946c5f949a6","Type":"ContainerStarted","Data":"58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087"}
Feb 04 12:24:23 crc kubenswrapper[4728]: I0204 12:24:23.677337 4728 generic.go:334] "Generic (PLEG): container finished" podID="008f1ee1-03f4-4f03-909d-5946c5f949a6" containerID="58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087" exitCode=0
Feb 04 12:24:23 crc kubenswrapper[4728]: I0204 12:24:23.677493 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mg9l" event={"ID":"008f1ee1-03f4-4f03-909d-5946c5f949a6","Type":"ContainerDied","Data":"58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087"}
Feb 04 12:24:24 crc kubenswrapper[4728]: I0204 12:24:24.688649 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mg9l" event={"ID":"008f1ee1-03f4-4f03-909d-5946c5f949a6","Type":"ContainerStarted","Data":"675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44"}
Feb 04 12:24:24 crc kubenswrapper[4728]: I0204 12:24:24.716288 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9mg9l" podStartSLOduration=3.171634931 podStartE2EDuration="5.716266059s" podCreationTimestamp="2026-02-04 12:24:19 +0000 UTC" firstStartedPulling="2026-02-04 12:24:21.65543997 +0000 UTC m=+3410.798144365" lastFinishedPulling="2026-02-04 12:24:24.200071108 +0000 UTC m=+3413.342775493" observedRunningTime="2026-02-04 12:24:24.712400804 +0000 UTC m=+3413.855105189" watchObservedRunningTime="2026-02-04 12:24:24.716266059 +0000 UTC m=+3413.858970464"
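The startup-latency entry above encodes a simple relation that can be checked against its own monotonic offsets (the m=+... values): podStartSLOduration is the end-to-end startup time minus the image-pull window. A small sketch reproducing that arithmetic, assuming this exclusion rule is what pod_startup_latency_tracker.go applies:

    package main

    import "fmt"

    func main() {
    	// Monotonic offsets (the m=+... values) copied from the entry above.
    	firstStartedPulling := 3410.798144365
    	lastFinishedPulling := 3413.342775493
    	e2e := 5.716266059 // podStartE2EDuration

    	pullWindow := lastFinishedPulling - firstStartedPulling // ≈ 2.544631s
    	slo := e2e - pullWindow                                 // image-pull time excluded
    	fmt.Printf("podStartSLOduration ≈ %.9fs\n", slo)        // 3.171634931s, as logged
    }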
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.416411 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008f1ee1-03f4-4f03-909d-5946c5f949a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "008f1ee1-03f4-4f03-909d-5946c5f949a6" (UID: "008f1ee1-03f4-4f03-909d-5946c5f949a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.453879 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008f1ee1-03f4-4f03-909d-5946c5f949a6-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.453952 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2t4j\" (UniqueName: \"kubernetes.io/projected/008f1ee1-03f4-4f03-909d-5946c5f949a6-kube-api-access-f2t4j\") on node \"crc\" DevicePath \"\"" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.453968 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008f1ee1-03f4-4f03-909d-5946c5f949a6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.782465 4728 generic.go:334] "Generic (PLEG): container finished" podID="008f1ee1-03f4-4f03-909d-5946c5f949a6" containerID="675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44" exitCode=0 Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.782487 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mg9l" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.782502 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mg9l" event={"ID":"008f1ee1-03f4-4f03-909d-5946c5f949a6","Type":"ContainerDied","Data":"675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44"} Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.783093 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mg9l" event={"ID":"008f1ee1-03f4-4f03-909d-5946c5f949a6","Type":"ContainerDied","Data":"ae10749f540377ccbc082d1ac51957d382bc656ac9a0e58959fd5c964a017cb8"} Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.783116 4728 scope.go:117] "RemoveContainer" containerID="675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.813178 4728 scope.go:117] "RemoveContainer" containerID="58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.813589 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9mg9l"] Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.832908 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9mg9l"] Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.835479 4728 scope.go:117] "RemoveContainer" containerID="1af65a37076f5f40bace7e74d8e28f16bb46ada51472128bface6b6c0766ccfb" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.883270 4728 scope.go:117] "RemoveContainer" containerID="675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44" Feb 04 12:24:33 crc kubenswrapper[4728]: E0204 12:24:33.883674 4728 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44\": container with ID starting with 675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44 not found: ID does not exist" containerID="675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.883719 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44"} err="failed to get container status \"675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44\": rpc error: code = NotFound desc = could not find container \"675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44\": container with ID starting with 675343d952fcf8a7fdeed5aec181076c68c7946fa2dce45337b9f7a5d1f65f44 not found: ID does not exist" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.883761 4728 scope.go:117] "RemoveContainer" containerID="58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087" Feb 04 12:24:33 crc kubenswrapper[4728]: E0204 12:24:33.884244 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087\": container with ID starting with 58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087 not found: ID does not exist" containerID="58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.884268 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087"} err="failed to get container status \"58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087\": rpc error: code = NotFound desc = could not find container \"58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087\": container with ID starting with 58281ede5e7bc914536fac851bacb6ebbe8405201f35c4cf565abf9bc7f40087 not found: ID does not exist" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.884283 4728 scope.go:117] "RemoveContainer" containerID="1af65a37076f5f40bace7e74d8e28f16bb46ada51472128bface6b6c0766ccfb" Feb 04 12:24:33 crc kubenswrapper[4728]: E0204 12:24:33.884553 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af65a37076f5f40bace7e74d8e28f16bb46ada51472128bface6b6c0766ccfb\": container with ID starting with 1af65a37076f5f40bace7e74d8e28f16bb46ada51472128bface6b6c0766ccfb not found: ID does not exist" containerID="1af65a37076f5f40bace7e74d8e28f16bb46ada51472128bface6b6c0766ccfb" Feb 04 12:24:33 crc kubenswrapper[4728]: I0204 12:24:33.884583 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af65a37076f5f40bace7e74d8e28f16bb46ada51472128bface6b6c0766ccfb"} err="failed to get container status \"1af65a37076f5f40bace7e74d8e28f16bb46ada51472128bface6b6c0766ccfb\": rpc error: code = NotFound desc = could not find container \"1af65a37076f5f40bace7e74d8e28f16bb46ada51472128bface6b6c0766ccfb\": container with ID starting with 1af65a37076f5f40bace7e74d8e28f16bb46ada51472128bface6b6c0766ccfb not found: ID does not exist" Feb 04 12:24:35 crc kubenswrapper[4728]: I0204 12:24:35.569228 4728 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="008f1ee1-03f4-4f03-909d-5946c5f949a6" path="/var/lib/kubelet/pods/008f1ee1-03f4-4f03-909d-5946c5f949a6/volumes" Feb 04 12:24:56 crc kubenswrapper[4728]: I0204 12:24:56.058437 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-7a49-account-create-update-rlk6x"] Feb 04 12:24:56 crc kubenswrapper[4728]: I0204 12:24:56.067212 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-64c95"] Feb 04 12:24:56 crc kubenswrapper[4728]: I0204 12:24:56.077306 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-64c95"] Feb 04 12:24:56 crc kubenswrapper[4728]: I0204 12:24:56.085604 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-7a49-account-create-update-rlk6x"] Feb 04 12:24:57 crc kubenswrapper[4728]: I0204 12:24:57.565377 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb1855f-7a6c-42af-ac76-95d78714517c" path="/var/lib/kubelet/pods/2cb1855f-7a6c-42af-ac76-95d78714517c/volumes" Feb 04 12:24:57 crc kubenswrapper[4728]: I0204 12:24:57.566159 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31b7dab7-5f37-4d0a-9606-965afa056370" path="/var/lib/kubelet/pods/31b7dab7-5f37-4d0a-9606-965afa056370/volumes" Feb 04 12:25:43 crc kubenswrapper[4728]: I0204 12:25:43.028312 4728 scope.go:117] "RemoveContainer" containerID="03808ff4a15f8b09151571b9ded8969060829c89287baad0a669654b385e8d94" Feb 04 12:25:43 crc kubenswrapper[4728]: I0204 12:25:43.062697 4728 scope.go:117] "RemoveContainer" containerID="ece34e87dc6a3d7bf3f2152635badcb89a85cb2cfbb5eb86f5f0b2808974ce49" Feb 04 12:26:05 crc kubenswrapper[4728]: I0204 12:26:05.448407 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:26:05 crc kubenswrapper[4728]: I0204 12:26:05.449105 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:26:35 crc kubenswrapper[4728]: I0204 12:26:35.448607 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:26:35 crc kubenswrapper[4728]: I0204 12:26:35.449208 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:27:03 crc kubenswrapper[4728]: I0204 12:27:03.844737 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6dcb54f59-lnlx2_3b49d7d8-7c63-482c-b882-25c01e798afe/manager/0.log" Feb 04 12:27:05 crc kubenswrapper[4728]: I0204 12:27:05.448648 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj 
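The paired error/info lines above are a benign race, not a failure: the runtime has already deleted the container, so the follow-up ContainerStatus RPC during RemoveContainer returns gRPC NotFound, which the kubelet logs and then moves past. A sketch of how such an error is recognized, using the standard gRPC status API:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // isContainerNotFound reports whether a CRI RPC failed because the runtime
    // has already deleted the container, as in the log entries above.
    func isContainerNotFound(err error) bool {
    	return status.Code(err) == codes.NotFound
    }

    func main() {
    	err := status.Error(codes.NotFound,
    		`could not find container "675343d9...": ID does not exist`)
    	fmt.Println(isContainerNotFound(err)) // true
    }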
Feb 04 12:24:56 crc kubenswrapper[4728]: I0204 12:24:56.058437 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-7a49-account-create-update-rlk6x"]
Feb 04 12:24:56 crc kubenswrapper[4728]: I0204 12:24:56.067212 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-64c95"]
Feb 04 12:24:56 crc kubenswrapper[4728]: I0204 12:24:56.077306 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-64c95"]
Feb 04 12:24:56 crc kubenswrapper[4728]: I0204 12:24:56.085604 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-7a49-account-create-update-rlk6x"]
Feb 04 12:24:57 crc kubenswrapper[4728]: I0204 12:24:57.565377 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb1855f-7a6c-42af-ac76-95d78714517c" path="/var/lib/kubelet/pods/2cb1855f-7a6c-42af-ac76-95d78714517c/volumes"
Feb 04 12:24:57 crc kubenswrapper[4728]: I0204 12:24:57.566159 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31b7dab7-5f37-4d0a-9606-965afa056370" path="/var/lib/kubelet/pods/31b7dab7-5f37-4d0a-9606-965afa056370/volumes"
Feb 04 12:25:43 crc kubenswrapper[4728]: I0204 12:25:43.028312 4728 scope.go:117] "RemoveContainer" containerID="03808ff4a15f8b09151571b9ded8969060829c89287baad0a669654b385e8d94"
Feb 04 12:25:43 crc kubenswrapper[4728]: I0204 12:25:43.062697 4728 scope.go:117] "RemoveContainer" containerID="ece34e87dc6a3d7bf3f2152635badcb89a85cb2cfbb5eb86f5f0b2808974ce49"
Feb 04 12:26:05 crc kubenswrapper[4728]: I0204 12:26:05.448407 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 12:26:05 crc kubenswrapper[4728]: I0204 12:26:05.449105 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 12:26:35 crc kubenswrapper[4728]: I0204 12:26:35.448607 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 12:26:35 crc kubenswrapper[4728]: I0204 12:26:35.449208 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 12:27:03 crc kubenswrapper[4728]: I0204 12:27:03.844737 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6dcb54f59-lnlx2_3b49d7d8-7c63-482c-b882-25c01e798afe/manager/0.log"
Feb 04 12:27:05 crc kubenswrapper[4728]: I0204 12:27:05.448648 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 12:27:05 crc kubenswrapper[4728]: I0204 12:27:05.449318 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 12:27:05 crc kubenswrapper[4728]: I0204 12:27:05.449409 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 12:27:05 crc kubenswrapper[4728]: I0204 12:27:05.450865 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 12:27:05 crc kubenswrapper[4728]: I0204 12:27:05.450960 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9" gracePeriod=600
Feb 04 12:27:05 crc kubenswrapper[4728]: E0204 12:27:05.586679 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:27:06 crc kubenswrapper[4728]: I0204 12:27:06.357352 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9" exitCode=0
Feb 04 12:27:06 crc kubenswrapper[4728]: I0204 12:27:06.357420 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"}
Feb 04 12:27:06 crc kubenswrapper[4728]: I0204 12:27:06.357705 4728 scope.go:117] "RemoveContainer" containerID="2c6acb80889046d92069b3298c3eed8cc5e6b53fb2b7561199c1feb4cd3c2597"
Feb 04 12:27:06 crc kubenswrapper[4728]: I0204 12:27:06.358446 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:27:06 crc kubenswrapper[4728]: E0204 12:27:06.360867 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:27:21 crc kubenswrapper[4728]: I0204 12:27:21.563273 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:27:21 crc kubenswrapper[4728]: E0204 12:27:21.564150 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:27:33 crc kubenswrapper[4728]: I0204 12:27:33.553719 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:27:33 crc kubenswrapper[4728]: E0204 12:27:33.554486 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:27:45 crc kubenswrapper[4728]: I0204 12:27:45.553995 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:27:45 crc kubenswrapper[4728]: E0204 12:27:45.554674 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:27:58 crc kubenswrapper[4728]: I0204 12:27:58.554177 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:27:58 crc kubenswrapper[4728]: E0204 12:27:58.554964 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:28:12 crc kubenswrapper[4728]: I0204 12:28:12.554470 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:28:12 crc kubenswrapper[4728]: E0204 12:28:12.555325 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:28:27 crc kubenswrapper[4728]: I0204 12:28:27.555014 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:28:27 crc kubenswrapper[4728]: E0204 12:28:27.555987 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:28:39 crc kubenswrapper[4728]: I0204 12:28:39.554244 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:28:39 crc kubenswrapper[4728]: E0204 12:28:39.555111 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:28:52 crc kubenswrapper[4728]: I0204 12:28:52.554298 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:28:52 crc kubenswrapper[4728]: E0204 12:28:52.555281 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:29:04 crc kubenswrapper[4728]: I0204 12:29:04.554063 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:29:04 crc kubenswrapper[4728]: E0204 12:29:04.554784 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:29:18 crc kubenswrapper[4728]: I0204 12:29:18.554238 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:29:18 crc kubenswrapper[4728]: E0204 12:29:18.555085 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
container" podUID="008f1ee1-03f4-4f03-909d-5946c5f949a6" containerName="registry-server" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.394368 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="008f1ee1-03f4-4f03-909d-5946c5f949a6" containerName="registry-server" Feb 04 12:29:25 crc kubenswrapper[4728]: E0204 12:29:25.394379 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008f1ee1-03f4-4f03-909d-5946c5f949a6" containerName="extract-utilities" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.394387 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="008f1ee1-03f4-4f03-909d-5946c5f949a6" containerName="extract-utilities" Feb 04 12:29:25 crc kubenswrapper[4728]: E0204 12:29:25.394416 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008f1ee1-03f4-4f03-909d-5946c5f949a6" containerName="extract-content" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.394423 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="008f1ee1-03f4-4f03-909d-5946c5f949a6" containerName="extract-content" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.394629 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="008f1ee1-03f4-4f03-909d-5946c5f949a6" containerName="registry-server" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.396119 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.421673 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9c52"] Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.423411 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-catalog-content\") pod \"certified-operators-s9c52\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.423536 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n9tg\" (UniqueName: \"kubernetes.io/projected/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-kube-api-access-2n9tg\") pod \"certified-operators-s9c52\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.423598 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-utilities\") pod \"certified-operators-s9c52\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.525453 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-catalog-content\") pod \"certified-operators-s9c52\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.525562 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n9tg\" (UniqueName: 
\"kubernetes.io/projected/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-kube-api-access-2n9tg\") pod \"certified-operators-s9c52\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.525604 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-utilities\") pod \"certified-operators-s9c52\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.526472 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-utilities\") pod \"certified-operators-s9c52\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.526490 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-catalog-content\") pod \"certified-operators-s9c52\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.546226 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n9tg\" (UniqueName: \"kubernetes.io/projected/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-kube-api-access-2n9tg\") pod \"certified-operators-s9c52\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.768459 4728 util.go:30] "No sandbox for pod can be found. 
Feb 04 12:29:25 crc kubenswrapper[4728]: I0204 12:29:25.768459 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9c52"
Feb 04 12:29:26 crc kubenswrapper[4728]: I0204 12:29:26.287869 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9c52"]
Feb 04 12:29:26 crc kubenswrapper[4728]: I0204 12:29:26.766424 4728 generic.go:334] "Generic (PLEG): container finished" podID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" containerID="5b2e2d66f79d342249f5e2a88d9d20271574687641fd636e4654d63dfca3722f" exitCode=0
Feb 04 12:29:26 crc kubenswrapper[4728]: I0204 12:29:26.766484 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9c52" event={"ID":"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7","Type":"ContainerDied","Data":"5b2e2d66f79d342249f5e2a88d9d20271574687641fd636e4654d63dfca3722f"}
Feb 04 12:29:26 crc kubenswrapper[4728]: I0204 12:29:26.767203 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9c52" event={"ID":"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7","Type":"ContainerStarted","Data":"0bd18d27f137a2716778e22bd79be427973a27b3f630898d6b275c181de8af22"}
Feb 04 12:29:26 crc kubenswrapper[4728]: I0204 12:29:26.770138 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 04 12:29:28 crc kubenswrapper[4728]: I0204 12:29:28.787150 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9c52" event={"ID":"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7","Type":"ContainerStarted","Data":"35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d"}
Feb 04 12:29:31 crc kubenswrapper[4728]: I0204 12:29:31.816022 4728 generic.go:334] "Generic (PLEG): container finished" podID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" containerID="35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d" exitCode=0
Feb 04 12:29:31 crc kubenswrapper[4728]: I0204 12:29:31.816108 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9c52" event={"ID":"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7","Type":"ContainerDied","Data":"35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d"}
Feb 04 12:29:33 crc kubenswrapper[4728]: I0204 12:29:33.554109 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:29:33 crc kubenswrapper[4728]: E0204 12:29:33.554360 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:29:33 crc kubenswrapper[4728]: I0204 12:29:33.840297 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9c52" event={"ID":"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7","Type":"ContainerStarted","Data":"a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d"}
Feb 04 12:29:35 crc kubenswrapper[4728]: I0204 12:29:35.769535 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s9c52"
Feb 04 12:29:35 crc kubenswrapper[4728]: I0204 12:29:35.769596 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s9c52"
Feb 04 12:29:35 crc kubenswrapper[4728]: I0204 12:29:35.823788 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s9c52"
Feb 04 12:29:35 crc kubenswrapper[4728]: I0204 12:29:35.847426 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s9c52" podStartSLOduration=4.881715853 podStartE2EDuration="10.847405197s" podCreationTimestamp="2026-02-04 12:29:25 +0000 UTC" firstStartedPulling="2026-02-04 12:29:26.769458524 +0000 UTC m=+3715.912162959" lastFinishedPulling="2026-02-04 12:29:32.735147928 +0000 UTC m=+3721.877852303" observedRunningTime="2026-02-04 12:29:33.868856023 +0000 UTC m=+3723.011560418" watchObservedRunningTime="2026-02-04 12:29:35.847405197 +0000 UTC m=+3724.990109582"
Feb 04 12:29:45 crc kubenswrapper[4728]: I0204 12:29:45.554831 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:29:45 crc kubenswrapper[4728]: E0204 12:29:45.555629 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:29:45 crc kubenswrapper[4728]: I0204 12:29:45.825015 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s9c52"
Feb 04 12:29:45 crc kubenswrapper[4728]: I0204 12:29:45.887426 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9c52"]
Feb 04 12:29:45 crc kubenswrapper[4728]: I0204 12:29:45.975265 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s9c52" podUID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" containerName="registry-server" containerID="cri-o://a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d" gracePeriod=2
Need to start a new one" pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.625512 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-utilities\") pod \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.625633 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n9tg\" (UniqueName: \"kubernetes.io/projected/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-kube-api-access-2n9tg\") pod \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.625701 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-catalog-content\") pod \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\" (UID: \"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7\") " Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.627361 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-utilities" (OuterVolumeSpecName: "utilities") pod "b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" (UID: "b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.631993 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-kube-api-access-2n9tg" (OuterVolumeSpecName: "kube-api-access-2n9tg") pod "b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" (UID: "b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7"). InnerVolumeSpecName "kube-api-access-2n9tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.682799 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" (UID: "b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.727730 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.727783 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n9tg\" (UniqueName: \"kubernetes.io/projected/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-kube-api-access-2n9tg\") on node \"crc\" DevicePath \"\"" Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.727794 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.987237 4728 generic.go:334] "Generic (PLEG): container finished" podID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" containerID="a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d" exitCode=0 Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.987289 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9c52" event={"ID":"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7","Type":"ContainerDied","Data":"a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d"} Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.987348 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9c52" event={"ID":"b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7","Type":"ContainerDied","Data":"0bd18d27f137a2716778e22bd79be427973a27b3f630898d6b275c181de8af22"} Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.987349 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9c52" Feb 04 12:29:46 crc kubenswrapper[4728]: I0204 12:29:46.987368 4728 scope.go:117] "RemoveContainer" containerID="a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d" Feb 04 12:29:47 crc kubenswrapper[4728]: I0204 12:29:47.009152 4728 scope.go:117] "RemoveContainer" containerID="35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d" Feb 04 12:29:47 crc kubenswrapper[4728]: I0204 12:29:47.033269 4728 scope.go:117] "RemoveContainer" containerID="5b2e2d66f79d342249f5e2a88d9d20271574687641fd636e4654d63dfca3722f" Feb 04 12:29:47 crc kubenswrapper[4728]: I0204 12:29:47.036087 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9c52"] Feb 04 12:29:47 crc kubenswrapper[4728]: I0204 12:29:47.052463 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s9c52"] Feb 04 12:29:47 crc kubenswrapper[4728]: I0204 12:29:47.074148 4728 scope.go:117] "RemoveContainer" containerID="a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d" Feb 04 12:29:47 crc kubenswrapper[4728]: E0204 12:29:47.074524 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d\": container with ID starting with a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d not found: ID does not exist" containerID="a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d" Feb 04 12:29:47 crc kubenswrapper[4728]: I0204 12:29:47.074571 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d"} err="failed to get container status \"a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d\": rpc error: code = NotFound desc = could not find container \"a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d\": container with ID starting with a699794083db90a0e8a128e5d8882b90eeac279b4910474039601f04f0bf8a4d not found: ID does not exist" Feb 04 12:29:47 crc kubenswrapper[4728]: I0204 12:29:47.074599 4728 scope.go:117] "RemoveContainer" containerID="35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d" Feb 04 12:29:47 crc kubenswrapper[4728]: E0204 12:29:47.074815 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d\": container with ID starting with 35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d not found: ID does not exist" containerID="35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d" Feb 04 12:29:47 crc kubenswrapper[4728]: I0204 12:29:47.074843 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d"} err="failed to get container status \"35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d\": rpc error: code = NotFound desc = could not find container \"35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d\": container with ID starting with 35bd00b14e50cada9e1df5d913ea8c9a866adf295077da81c0837a9c58bed54d not found: ID does not exist" Feb 04 12:29:47 crc kubenswrapper[4728]: I0204 12:29:47.074858 4728 scope.go:117] "RemoveContainer" 
containerID="5b2e2d66f79d342249f5e2a88d9d20271574687641fd636e4654d63dfca3722f" Feb 04 12:29:47 crc kubenswrapper[4728]: E0204 12:29:47.075094 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2e2d66f79d342249f5e2a88d9d20271574687641fd636e4654d63dfca3722f\": container with ID starting with 5b2e2d66f79d342249f5e2a88d9d20271574687641fd636e4654d63dfca3722f not found: ID does not exist" containerID="5b2e2d66f79d342249f5e2a88d9d20271574687641fd636e4654d63dfca3722f" Feb 04 12:29:47 crc kubenswrapper[4728]: I0204 12:29:47.075140 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2e2d66f79d342249f5e2a88d9d20271574687641fd636e4654d63dfca3722f"} err="failed to get container status \"5b2e2d66f79d342249f5e2a88d9d20271574687641fd636e4654d63dfca3722f\": rpc error: code = NotFound desc = could not find container \"5b2e2d66f79d342249f5e2a88d9d20271574687641fd636e4654d63dfca3722f\": container with ID starting with 5b2e2d66f79d342249f5e2a88d9d20271574687641fd636e4654d63dfca3722f not found: ID does not exist" Feb 04 12:29:47 crc kubenswrapper[4728]: I0204 12:29:47.570589 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" path="/var/lib/kubelet/pods/b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7/volumes" Feb 04 12:29:56 crc kubenswrapper[4728]: I0204 12:29:56.554640 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9" Feb 04 12:29:56 crc kubenswrapper[4728]: E0204 12:29:56.555409 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.187212 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7"] Feb 04 12:30:00 crc kubenswrapper[4728]: E0204 12:30:00.188362 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" containerName="extract-utilities" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.188639 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" containerName="extract-utilities" Feb 04 12:30:00 crc kubenswrapper[4728]: E0204 12:30:00.188669 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" containerName="extract-content" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.188677 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" containerName="extract-content" Feb 04 12:30:00 crc kubenswrapper[4728]: E0204 12:30:00.188718 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" containerName="registry-server" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.188727 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" containerName="registry-server" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.188993 4728 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="b515c6ea-bbbc-45e4-96cf-f0b452dcf4f7" containerName="registry-server" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.189815 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.197824 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7"] Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.212792 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6djd5\" (UniqueName: \"kubernetes.io/projected/ffe2de05-304c-40c1-b022-49d46572fd38-kube-api-access-6djd5\") pod \"collect-profiles-29503470-rvmw7\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.212894 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffe2de05-304c-40c1-b022-49d46572fd38-secret-volume\") pod \"collect-profiles-29503470-rvmw7\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.213008 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffe2de05-304c-40c1-b022-49d46572fd38-config-volume\") pod \"collect-profiles-29503470-rvmw7\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.214910 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.215101 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.314461 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6djd5\" (UniqueName: \"kubernetes.io/projected/ffe2de05-304c-40c1-b022-49d46572fd38-kube-api-access-6djd5\") pod \"collect-profiles-29503470-rvmw7\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.314535 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffe2de05-304c-40c1-b022-49d46572fd38-secret-volume\") pod \"collect-profiles-29503470-rvmw7\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.314632 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffe2de05-304c-40c1-b022-49d46572fd38-config-volume\") pod \"collect-profiles-29503470-rvmw7\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 
12:30:00.315523 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffe2de05-304c-40c1-b022-49d46572fd38-config-volume\") pod \"collect-profiles-29503470-rvmw7\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.869041 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6djd5\" (UniqueName: \"kubernetes.io/projected/ffe2de05-304c-40c1-b022-49d46572fd38-kube-api-access-6djd5\") pod \"collect-profiles-29503470-rvmw7\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" Feb 04 12:30:00 crc kubenswrapper[4728]: I0204 12:30:00.880344 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffe2de05-304c-40c1-b022-49d46572fd38-secret-volume\") pod \"collect-profiles-29503470-rvmw7\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" Feb 04 12:30:01 crc kubenswrapper[4728]: I0204 12:30:01.140892 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" Feb 04 12:30:01 crc kubenswrapper[4728]: I0204 12:30:01.580917 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7"] Feb 04 12:30:02 crc kubenswrapper[4728]: I0204 12:30:02.124429 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" event={"ID":"ffe2de05-304c-40c1-b022-49d46572fd38","Type":"ContainerStarted","Data":"d71f880cf59c1327d0059100efb4a469fe109723ba0061cf7cd0c397d82f7e06"} Feb 04 12:30:02 crc kubenswrapper[4728]: I0204 12:30:02.124742 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" event={"ID":"ffe2de05-304c-40c1-b022-49d46572fd38","Type":"ContainerStarted","Data":"be590a478307ef7d253e99a3c7611d99495ae9bc54afda9a241fbd7167aa7c7c"} Feb 04 12:30:02 crc kubenswrapper[4728]: I0204 12:30:02.168087 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" podStartSLOduration=2.16806643 podStartE2EDuration="2.16806643s" podCreationTimestamp="2026-02-04 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 12:30:02.162165927 +0000 UTC m=+3751.304870332" watchObservedRunningTime="2026-02-04 12:30:02.16806643 +0000 UTC m=+3751.310770815" Feb 04 12:30:03 crc kubenswrapper[4728]: I0204 12:30:03.133817 4728 generic.go:334] "Generic (PLEG): container finished" podID="ffe2de05-304c-40c1-b022-49d46572fd38" containerID="d71f880cf59c1327d0059100efb4a469fe109723ba0061cf7cd0c397d82f7e06" exitCode=0 Feb 04 12:30:03 crc kubenswrapper[4728]: I0204 12:30:03.134129 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" event={"ID":"ffe2de05-304c-40c1-b022-49d46572fd38","Type":"ContainerDied","Data":"d71f880cf59c1327d0059100efb4a469fe109723ba0061cf7cd0c397d82f7e06"} Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 
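The startup-latency entry above is simple arithmetic: podStartSLOduration is the gap from podCreationTimestamp to the watch-observed running time, minus image-pull time; here the pull timestamps are the zero time, so nothing is deducted. A small sketch reproducing the 2.16806643s figure (the layout string is an assumption matching the printed format, not the kubelet's own tracker code):

    package main

    import (
    	"fmt"
    	"time"
    )

    // Timestamp layout matching the values printed by the tracker entry above.
    const stamp = "2006-01-02 15:04:05.999999999 -0700 MST"

    func main() {
    	created, _ := time.Parse(stamp, "2026-02-04 12:30:00 +0000 UTC")
    	observed, _ := time.Parse(stamp, "2026-02-04 12:30:02.16806643 +0000 UTC")

    	// No image pull happened (firstStartedPulling/lastFinishedPulling are the
    	// zero time), so the raw creation-to-running gap is reported unchanged.
    	fmt.Println(observed.Sub(created)) // 2.16806643s
    }
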
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.502929 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7"
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.659395 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv"]
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.671503 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503425-5w4qv"]
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.698823 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffe2de05-304c-40c1-b022-49d46572fd38-config-volume\") pod \"ffe2de05-304c-40c1-b022-49d46572fd38\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") "
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.699194 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6djd5\" (UniqueName: \"kubernetes.io/projected/ffe2de05-304c-40c1-b022-49d46572fd38-kube-api-access-6djd5\") pod \"ffe2de05-304c-40c1-b022-49d46572fd38\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") "
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.699302 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffe2de05-304c-40c1-b022-49d46572fd38-secret-volume\") pod \"ffe2de05-304c-40c1-b022-49d46572fd38\" (UID: \"ffe2de05-304c-40c1-b022-49d46572fd38\") "
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.699785 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe2de05-304c-40c1-b022-49d46572fd38-config-volume" (OuterVolumeSpecName: "config-volume") pod "ffe2de05-304c-40c1-b022-49d46572fd38" (UID: "ffe2de05-304c-40c1-b022-49d46572fd38"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.704993 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe2de05-304c-40c1-b022-49d46572fd38-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ffe2de05-304c-40c1-b022-49d46572fd38" (UID: "ffe2de05-304c-40c1-b022-49d46572fd38"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.709226 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe2de05-304c-40c1-b022-49d46572fd38-kube-api-access-6djd5" (OuterVolumeSpecName: "kube-api-access-6djd5") pod "ffe2de05-304c-40c1-b022-49d46572fd38" (UID: "ffe2de05-304c-40c1-b022-49d46572fd38"). InnerVolumeSpecName "kube-api-access-6djd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.801120 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffe2de05-304c-40c1-b022-49d46572fd38-config-volume\") on node \"crc\" DevicePath \"\""
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.801160 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6djd5\" (UniqueName: \"kubernetes.io/projected/ffe2de05-304c-40c1-b022-49d46572fd38-kube-api-access-6djd5\") on node \"crc\" DevicePath \"\""
Feb 04 12:30:04 crc kubenswrapper[4728]: I0204 12:30:04.801174 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffe2de05-304c-40c1-b022-49d46572fd38-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 04 12:30:05 crc kubenswrapper[4728]: I0204 12:30:05.151607 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7" event={"ID":"ffe2de05-304c-40c1-b022-49d46572fd38","Type":"ContainerDied","Data":"be590a478307ef7d253e99a3c7611d99495ae9bc54afda9a241fbd7167aa7c7c"}
Feb 04 12:30:05 crc kubenswrapper[4728]: I0204 12:30:05.151969 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be590a478307ef7d253e99a3c7611d99495ae9bc54afda9a241fbd7167aa7c7c"
Feb 04 12:30:05 crc kubenswrapper[4728]: I0204 12:30:05.151652 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503470-rvmw7"
Feb 04 12:30:05 crc kubenswrapper[4728]: I0204 12:30:05.569281 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf7aba07-22c7-451d-840c-92f8899544cd" path="/var/lib/kubelet/pods/bf7aba07-22c7-451d-840c-92f8899544cd/volumes"
Feb 04 12:30:09 crc kubenswrapper[4728]: I0204 12:30:09.554476 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:30:09 crc kubenswrapper[4728]: E0204 12:30:09.555426 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:30:24 crc kubenswrapper[4728]: I0204 12:30:24.554036 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:30:24 crc kubenswrapper[4728]: E0204 12:30:24.554735 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:30:37 crc kubenswrapper[4728]: I0204 12:30:37.553882 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:30:37 crc kubenswrapper[4728]: E0204 12:30:37.554646 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:30:43 crc kubenswrapper[4728]: I0204 12:30:43.222080 4728 scope.go:117] "RemoveContainer" containerID="25519d7fdf75943f48aef36004d7b1c9897a1b9149d175fc9c157cbfdbd9887c" Feb 04 12:30:49 crc kubenswrapper[4728]: I0204 12:30:49.554720 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9" Feb 04 12:30:49 crc kubenswrapper[4728]: E0204 12:30:49.555658 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:31:01 crc kubenswrapper[4728]: I0204 12:31:01.562941 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9" Feb 04 12:31:01 crc kubenswrapper[4728]: E0204 12:31:01.563725 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:31:05 crc kubenswrapper[4728]: I0204 12:31:05.442295 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6dcb54f59-lnlx2_3b49d7d8-7c63-482c-b882-25c01e798afe/manager/0.log" Feb 04 12:31:08 crc kubenswrapper[4728]: I0204 12:31:08.905740 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:31:08 crc kubenswrapper[4728]: I0204 12:31:08.906411 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="prometheus" containerID="cri-o://57bb2acba274931978eb2609877b0a41953c42c5b17804041cab34eaaa1f4356" gracePeriod=600 Feb 04 12:31:08 crc kubenswrapper[4728]: I0204 12:31:08.906945 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="thanos-sidecar" containerID="cri-o://b10b1ea8ec2398f79d1b3f41651b1b62231ff73a787bb0e6fede2edf6736f632" gracePeriod=600 Feb 04 12:31:08 crc kubenswrapper[4728]: I0204 12:31:08.907004 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="config-reloader" containerID="cri-o://12d6578e3e1eb4c34cfffefc544de6afb9af804cfbe32f9da79824b5225546a5" gracePeriod=600 Feb 04 12:31:09 crc kubenswrapper[4728]: I0204 12:31:09.775917 4728 generic.go:334] "Generic (PLEG): container finished" podID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" 
containerID="b10b1ea8ec2398f79d1b3f41651b1b62231ff73a787bb0e6fede2edf6736f632" exitCode=0 Feb 04 12:31:09 crc kubenswrapper[4728]: I0204 12:31:09.776604 4728 generic.go:334] "Generic (PLEG): container finished" podID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerID="12d6578e3e1eb4c34cfffefc544de6afb9af804cfbe32f9da79824b5225546a5" exitCode=0 Feb 04 12:31:09 crc kubenswrapper[4728]: I0204 12:31:09.776675 4728 generic.go:334] "Generic (PLEG): container finished" podID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerID="57bb2acba274931978eb2609877b0a41953c42c5b17804041cab34eaaa1f4356" exitCode=0 Feb 04 12:31:09 crc kubenswrapper[4728]: I0204 12:31:09.776002 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3","Type":"ContainerDied","Data":"b10b1ea8ec2398f79d1b3f41651b1b62231ff73a787bb0e6fede2edf6736f632"} Feb 04 12:31:09 crc kubenswrapper[4728]: I0204 12:31:09.776835 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3","Type":"ContainerDied","Data":"12d6578e3e1eb4c34cfffefc544de6afb9af804cfbe32f9da79824b5225546a5"} Feb 04 12:31:09 crc kubenswrapper[4728]: I0204 12:31:09.776902 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3","Type":"ContainerDied","Data":"57bb2acba274931978eb2609877b0a41953c42c5b17804041cab34eaaa1f4356"} Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.023623 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145330 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145406 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-thanos-prometheus-http-client-file\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145450 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-tls-assets\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145486 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-db\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145523 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-0\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: 
\"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145567 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-2\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145618 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145654 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rx79\" (UniqueName: \"kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-kube-api-access-6rx79\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145730 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-1\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145854 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-secret-combined-ca-bundle\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145900 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145937 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config-out\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.145969 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\" (UID: \"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3\") " Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.146130 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: 
"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.146272 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.146389 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.146682 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-db" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "prometheus-metric-storage-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.147115 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-db\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.147141 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.147152 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.147162 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.158126 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config-out" (OuterVolumeSpecName: "config-out") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.158137 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.158193 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-kube-api-access-6rx79" (OuterVolumeSpecName: "kube-api-access-6rx79") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "kube-api-access-6rx79". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.158228 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.158342 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.158998 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config" (OuterVolumeSpecName: "config") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.166837 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.167912 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "secret-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.227002 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config" (OuterVolumeSpecName: "web-config") pod "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" (UID: "9d5066fe-5f1f-4b22-88f6-3515d0a16cc3"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.248938 4728 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.249208 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.249295 4728 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-config-out\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.249402 4728 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.249491 4728 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.249594 4728 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.249672 4728 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.249746 4728 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.249856 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rx79\" (UniqueName: \"kubernetes.io/projected/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3-kube-api-access-6rx79\") on node \"crc\" DevicePath \"\"" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.803108 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d5066fe-5f1f-4b22-88f6-3515d0a16cc3","Type":"ContainerDied","Data":"29f22e104230e60fbf5ac1548bcbf6b62e786ee2ca96c04dd9a023c0b623df8c"} Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.803164 4728 scope.go:117] "RemoveContainer" 
containerID="b10b1ea8ec2398f79d1b3f41651b1b62231ff73a787bb0e6fede2edf6736f632" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.803280 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.830654 4728 scope.go:117] "RemoveContainer" containerID="12d6578e3e1eb4c34cfffefc544de6afb9af804cfbe32f9da79824b5225546a5" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.846620 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.859217 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.873224 4728 scope.go:117] "RemoveContainer" containerID="57bb2acba274931978eb2609877b0a41953c42c5b17804041cab34eaaa1f4356" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.889281 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:31:10 crc kubenswrapper[4728]: E0204 12:31:10.889690 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="init-config-reloader" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.889707 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="init-config-reloader" Feb 04 12:31:10 crc kubenswrapper[4728]: E0204 12:31:10.889722 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="thanos-sidecar" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.889728 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="thanos-sidecar" Feb 04 12:31:10 crc kubenswrapper[4728]: E0204 12:31:10.889738 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="config-reloader" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.889759 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="config-reloader" Feb 04 12:31:10 crc kubenswrapper[4728]: E0204 12:31:10.889779 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="prometheus" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.889785 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="prometheus" Feb 04 12:31:10 crc kubenswrapper[4728]: E0204 12:31:10.889797 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe2de05-304c-40c1-b022-49d46572fd38" containerName="collect-profiles" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.889803 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe2de05-304c-40c1-b022-49d46572fd38" containerName="collect-profiles" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.889983 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="prometheus" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.889992 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="thanos-sidecar" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 
12:31:10.890002 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe2de05-304c-40c1-b022-49d46572fd38" containerName="collect-profiles" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.890020 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" containerName="config-reloader" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.891746 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.895425 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.895633 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-dpkpf" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.895859 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.895992 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.896356 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.896454 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.896546 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.896639 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.904940 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.908567 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.917915 4728 scope.go:117] "RemoveContainer" containerID="4c8212153d2a58c1172488fc4123805a7e3081ddf1997ba60a2ff504df17f97e" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.963939 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964234 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0" Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 
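The reflector "Caches populated" lines record the kubelet's watch caches filling for each Secret and ConfigMap the recreated pod references before volume setup proceeds. The kubelet uses its own per-object watch manager rather than the sketch below, but the same populate-then-sync pattern can be reproduced with client-go informers (the in-cluster config, namespace filter, and resync interval here are all illustrative assumptions):

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Watch only the namespace the pod lives in, analogous to the
    	// object-"openstack"/... scoping in the reflector lines above.
    	factory := informers.NewSharedInformerFactoryWithOptions(
    		client, 30*time.Second, informers.WithNamespace("openstack"))
    	secrets := factory.Core().V1().Secrets().Informer()

    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)
    	factory.WaitForCacheSync(stop) // the "Caches populated" point

    	fmt.Println("secrets cached:", len(secrets.GetStore().ListKeys()))
    }
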
Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964305 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-config\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964332 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964407 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964463 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/34765442-96a6-4824-912b-1d94f7d2a4c3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964483 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964534 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964568 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964586 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/34765442-96a6-4824-912b-1d94f7d2a4c3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964608 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964649 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:10 crc kubenswrapper[4728]: I0204 12:31:10.964666 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jz27\" (UniqueName: \"kubernetes.io/projected/34765442-96a6-4824-912b-1d94f7d2a4c3-kube-api-access-7jz27\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.066495 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.066559 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.066672 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-config\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.066701 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.066743 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.066800 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/34765442-96a6-4824-912b-1d94f7d2a4c3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.066826 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.066878 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.066920 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.066944 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/34765442-96a6-4824-912b-1d94f7d2a4c3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.066974 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.067018 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.067045 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jz27\" (UniqueName: \"kubernetes.io/projected/34765442-96a6-4824-912b-1d94f7d2a4c3-kube-api-access-7jz27\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.068399 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.070204 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.070245 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.070443 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/34765442-96a6-4824-912b-1d94f7d2a4c3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.073279 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.073476 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.074055 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.074133 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-config\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.075480 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.077524 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34765442-96a6-4824-912b-1d94f7d2a4c3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.079104 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/34765442-96a6-4824-912b-1d94f7d2a4c3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.087478 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/34765442-96a6-4824-912b-1d94f7d2a4c3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.092078 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jz27\" (UniqueName: \"kubernetes.io/projected/34765442-96a6-4824-912b-1d94f7d2a4c3-kube-api-access-7jz27\") pod \"prometheus-metric-storage-0\" (UID: \"34765442-96a6-4824-912b-1d94f7d2a4c3\") " pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.229834 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:11 crc kubenswrapper[4728]: I0204 12:31:11.585588 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d5066fe-5f1f-4b22-88f6-3515d0a16cc3" path="/var/lib/kubelet/pods/9d5066fe-5f1f-4b22-88f6-3515d0a16cc3/volumes"
Feb 04 12:31:12 crc kubenswrapper[4728]: I0204 12:31:12.196536 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 04 12:31:12 crc kubenswrapper[4728]: I0204 12:31:12.830898 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"34765442-96a6-4824-912b-1d94f7d2a4c3","Type":"ContainerStarted","Data":"469b733534b3b1ecd874609d0a5d7223971cefe55976dc21bb25dba41178c494"}
Feb 04 12:31:13 crc kubenswrapper[4728]: I0204 12:31:13.554070 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:31:13 crc kubenswrapper[4728]: E0204 12:31:13.554282 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:31:16 crc kubenswrapper[4728]: I0204 12:31:16.869734 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"34765442-96a6-4824-912b-1d94f7d2a4c3","Type":"ContainerStarted","Data":"7e69fb1ce23a576a0ee5289a52d61d22ad73ae643aa9fab2197b4eeb21144ebf"}
Feb 04 12:31:24 crc kubenswrapper[4728]: I0204 12:31:24.553813 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:31:24 crc kubenswrapper[4728]: E0204 12:31:24.554608 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:31:24 crc kubenswrapper[4728]: I0204 12:31:24.952312 4728 generic.go:334] "Generic (PLEG): container finished" podID="34765442-96a6-4824-912b-1d94f7d2a4c3" containerID="7e69fb1ce23a576a0ee5289a52d61d22ad73ae643aa9fab2197b4eeb21144ebf" exitCode=0
Feb 04 12:31:24 crc kubenswrapper[4728]: I0204 12:31:24.952356 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"34765442-96a6-4824-912b-1d94f7d2a4c3","Type":"ContainerDied","Data":"7e69fb1ce23a576a0ee5289a52d61d22ad73ae643aa9fab2197b4eeb21144ebf"}
Feb 04 12:31:25 crc kubenswrapper[4728]: I0204 12:31:25.962068 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"34765442-96a6-4824-912b-1d94f7d2a4c3","Type":"ContainerStarted","Data":"50be5a490d265fca5d8f5176b0a9e88afafc90313ff8fa1eabe629ae42c03a44"}
Feb 04 12:31:28 crc kubenswrapper[4728]: I0204 12:31:28.994837 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"34765442-96a6-4824-912b-1d94f7d2a4c3","Type":"ContainerStarted","Data":"3d59fb2c291d36e77bee4bf49ff69735fddb690cd4f2aaf15a8f46487177fed5"}
Feb 04 12:31:30 crc kubenswrapper[4728]: I0204 12:31:30.010139 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"34765442-96a6-4824-912b-1d94f7d2a4c3","Type":"ContainerStarted","Data":"786a8cec5c80a87f1f994fdd3c43d9a8dff7c10dccee43018498c35dd3b82a5c"}
Feb 04 12:31:30 crc kubenswrapper[4728]: I0204 12:31:30.060689 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.060665861 podStartE2EDuration="20.060665861s" podCreationTimestamp="2026-02-04 12:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-04 12:31:30.050270747 +0000 UTC m=+3839.192975132" watchObservedRunningTime="2026-02-04 12:31:30.060665861 +0000 UTC m=+3839.203370246"
Feb 04 12:31:31 crc kubenswrapper[4728]: I0204 12:31:31.229988 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:38 crc kubenswrapper[4728]: I0204 12:31:38.555078 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:31:38 crc kubenswrapper[4728]: E0204 12:31:38.555829 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:31:41 crc kubenswrapper[4728]: I0204 12:31:41.230845 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 04 12:31:41 crc kubenswrapper[4728]: I0204 12:31:41.237771 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
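The probe entries show the usual gating order: the readiness probe reports an empty status while the startup probe is still failing, startup flips from "unhealthy" to "started" at 12:31:41, and only then does readiness go "ready" (next entry). A minimal sketch of that gating, with a plain boolean standing in for the real HTTP probe result:

    package main

    import "fmt"

    type pod struct{ started, ready bool }

    // syncProbes mirrors the ordering in the log: readiness results are
    // ignored until the startup probe has succeeded once.
    func syncProbes(p *pod, startupOK, readinessOK bool) {
    	if !p.started {
    		if !startupOK {
    			fmt.Println(`probe="startup" status="unhealthy"`)
    			return
    		}
    		p.started = true
    		fmt.Println(`probe="startup" status="started"`)
    	}
    	if readinessOK && !p.ready {
    		p.ready = true
    		fmt.Println(`probe="readiness" status="ready"`)
    	}
    }

    func main() {
    	p := &pod{}
    	syncProbes(p, false, false) // 12:31:41 unhealthy
    	syncProbes(p, true, false)  // 12:31:41 started
    	syncProbes(p, true, true)   // 12:31:42 ready
    }
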
Feb 04 12:31:49 crc kubenswrapper[4728]: I0204 12:31:49.554320 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:31:49 crc kubenswrapper[4728]: E0204 12:31:49.555120 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:32:03 crc kubenswrapper[4728]: I0204 12:32:03.555379 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:32:03 crc kubenswrapper[4728]: E0204 12:32:03.556793 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292"
Feb 04 12:32:15 crc kubenswrapper[4728]: I0204 12:32:15.554392 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:32:16 crc kubenswrapper[4728]: I0204 12:32:16.464535 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"d3827ef241d57a2fe47b067be7ffbf7b5cdcdfe8ffcb18ff63fee0c7bc2e54cd"}
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.621406 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c7fqm"]
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.624538 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c7fqm"
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.636964 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c7fqm"]
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.736679 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-utilities\") pod \"redhat-operators-c7fqm\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " pod="openshift-marketplace/redhat-operators-c7fqm"
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.736775 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-catalog-content\") pod \"redhat-operators-c7fqm\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " pod="openshift-marketplace/redhat-operators-c7fqm"
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.736840 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxxnx\" (UniqueName: \"kubernetes.io/projected/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-kube-api-access-sxxnx\") pod \"redhat-operators-c7fqm\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " pod="openshift-marketplace/redhat-operators-c7fqm"
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.839282 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-utilities\") pod \"redhat-operators-c7fqm\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " pod="openshift-marketplace/redhat-operators-c7fqm"
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.839342 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-catalog-content\") pod \"redhat-operators-c7fqm\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " pod="openshift-marketplace/redhat-operators-c7fqm"
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.839386 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxxnx\" (UniqueName: \"kubernetes.io/projected/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-kube-api-access-sxxnx\") pod \"redhat-operators-c7fqm\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " pod="openshift-marketplace/redhat-operators-c7fqm"
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.839878 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-utilities\") pod \"redhat-operators-c7fqm\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " pod="openshift-marketplace/redhat-operators-c7fqm"
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.840020 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-catalog-content\") pod \"redhat-operators-c7fqm\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " pod="openshift-marketplace/redhat-operators-c7fqm"
Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.859603 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxxnx\" (UniqueName: \"kubernetes.io/projected/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-kube-api-access-sxxnx\") pod \"redhat-operators-c7fqm\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " pod="openshift-marketplace/redhat-operators-c7fqm"
\"kube-api-access-sxxnx\" (UniqueName: \"kubernetes.io/projected/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-kube-api-access-sxxnx\") pod \"redhat-operators-c7fqm\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " pod="openshift-marketplace/redhat-operators-c7fqm" Feb 04 12:33:26 crc kubenswrapper[4728]: I0204 12:33:26.954670 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c7fqm" Feb 04 12:33:27 crc kubenswrapper[4728]: I0204 12:33:27.449331 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c7fqm"] Feb 04 12:33:28 crc kubenswrapper[4728]: I0204 12:33:28.144076 4728 generic.go:334] "Generic (PLEG): container finished" podID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerID="effcef786bfc9f21220e8fff2dca7e70ab7237aec3735381f2357db1e3979c5d" exitCode=0 Feb 04 12:33:28 crc kubenswrapper[4728]: I0204 12:33:28.144170 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7fqm" event={"ID":"3ea70e4e-a5cb-419f-bee0-790cfc30d04c","Type":"ContainerDied","Data":"effcef786bfc9f21220e8fff2dca7e70ab7237aec3735381f2357db1e3979c5d"} Feb 04 12:33:28 crc kubenswrapper[4728]: I0204 12:33:28.144357 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7fqm" event={"ID":"3ea70e4e-a5cb-419f-bee0-790cfc30d04c","Type":"ContainerStarted","Data":"9e104478d9bd6d7039ad04165ef1cb1c435241b2cde63f119314ede5b13f0c83"} Feb 04 12:33:29 crc kubenswrapper[4728]: I0204 12:33:29.156119 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7fqm" event={"ID":"3ea70e4e-a5cb-419f-bee0-790cfc30d04c","Type":"ContainerStarted","Data":"69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400"} Feb 04 12:33:33 crc kubenswrapper[4728]: I0204 12:33:33.196147 4728 generic.go:334] "Generic (PLEG): container finished" podID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerID="69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400" exitCode=0 Feb 04 12:33:33 crc kubenswrapper[4728]: I0204 12:33:33.196218 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7fqm" event={"ID":"3ea70e4e-a5cb-419f-bee0-790cfc30d04c","Type":"ContainerDied","Data":"69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400"} Feb 04 12:33:35 crc kubenswrapper[4728]: I0204 12:33:35.222400 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7fqm" event={"ID":"3ea70e4e-a5cb-419f-bee0-790cfc30d04c","Type":"ContainerStarted","Data":"7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d"} Feb 04 12:33:35 crc kubenswrapper[4728]: I0204 12:33:35.247517 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c7fqm" podStartSLOduration=3.088397565 podStartE2EDuration="9.24750216s" podCreationTimestamp="2026-02-04 12:33:26 +0000 UTC" firstStartedPulling="2026-02-04 12:33:28.146898605 +0000 UTC m=+3957.289602990" lastFinishedPulling="2026-02-04 12:33:34.3060032 +0000 UTC m=+3963.448707585" observedRunningTime="2026-02-04 12:33:35.243706146 +0000 UTC m=+3964.386410531" watchObservedRunningTime="2026-02-04 12:33:35.24750216 +0000 UTC m=+3964.390206535" Feb 04 12:33:36 crc kubenswrapper[4728]: I0204 12:33:36.955258 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c7fqm" Feb 04 
12:33:36 crc kubenswrapper[4728]: I0204 12:33:36.955594 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c7fqm" Feb 04 12:33:38 crc kubenswrapper[4728]: I0204 12:33:38.005079 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c7fqm" podUID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerName="registry-server" probeResult="failure" output=< Feb 04 12:33:38 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 04 12:33:38 crc kubenswrapper[4728]: > Feb 04 12:33:47 crc kubenswrapper[4728]: I0204 12:33:47.029809 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c7fqm" Feb 04 12:33:47 crc kubenswrapper[4728]: I0204 12:33:47.098126 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c7fqm" Feb 04 12:33:47 crc kubenswrapper[4728]: I0204 12:33:47.270548 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c7fqm"] Feb 04 12:33:48 crc kubenswrapper[4728]: I0204 12:33:48.338186 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c7fqm" podUID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerName="registry-server" containerID="cri-o://7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d" gracePeriod=2 Feb 04 12:33:48 crc kubenswrapper[4728]: I0204 12:33:48.838598 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c7fqm" Feb 04 12:33:48 crc kubenswrapper[4728]: I0204 12:33:48.925991 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-catalog-content\") pod \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " Feb 04 12:33:48 crc kubenswrapper[4728]: I0204 12:33:48.926131 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxxnx\" (UniqueName: \"kubernetes.io/projected/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-kube-api-access-sxxnx\") pod \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " Feb 04 12:33:48 crc kubenswrapper[4728]: I0204 12:33:48.926195 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-utilities\") pod \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\" (UID: \"3ea70e4e-a5cb-419f-bee0-790cfc30d04c\") " Feb 04 12:33:48 crc kubenswrapper[4728]: I0204 12:33:48.927351 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-utilities" (OuterVolumeSpecName: "utilities") pod "3ea70e4e-a5cb-419f-bee0-790cfc30d04c" (UID: "3ea70e4e-a5cb-419f-bee0-790cfc30d04c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:33:48 crc kubenswrapper[4728]: I0204 12:33:48.932514 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-kube-api-access-sxxnx" (OuterVolumeSpecName: "kube-api-access-sxxnx") pod "3ea70e4e-a5cb-419f-bee0-790cfc30d04c" (UID: "3ea70e4e-a5cb-419f-bee0-790cfc30d04c"). InnerVolumeSpecName "kube-api-access-sxxnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.027663 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxxnx\" (UniqueName: \"kubernetes.io/projected/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-kube-api-access-sxxnx\") on node \"crc\" DevicePath \"\"" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.027694 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.041708 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ea70e4e-a5cb-419f-bee0-790cfc30d04c" (UID: "3ea70e4e-a5cb-419f-bee0-790cfc30d04c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.129530 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea70e4e-a5cb-419f-bee0-790cfc30d04c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.349949 4728 generic.go:334] "Generic (PLEG): container finished" podID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerID="7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d" exitCode=0 Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.350022 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7fqm" event={"ID":"3ea70e4e-a5cb-419f-bee0-790cfc30d04c","Type":"ContainerDied","Data":"7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d"} Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.350036 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c7fqm" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.350054 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7fqm" event={"ID":"3ea70e4e-a5cb-419f-bee0-790cfc30d04c","Type":"ContainerDied","Data":"9e104478d9bd6d7039ad04165ef1cb1c435241b2cde63f119314ede5b13f0c83"} Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.350077 4728 scope.go:117] "RemoveContainer" containerID="7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.370296 4728 scope.go:117] "RemoveContainer" containerID="69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.392246 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c7fqm"] Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.406954 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c7fqm"] Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.416080 4728 scope.go:117] "RemoveContainer" containerID="effcef786bfc9f21220e8fff2dca7e70ab7237aec3735381f2357db1e3979c5d" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.446268 4728 scope.go:117] "RemoveContainer" containerID="7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d" Feb 04 12:33:49 crc kubenswrapper[4728]: E0204 12:33:49.446888 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d\": container with ID starting with 7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d not found: ID does not exist" containerID="7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.446952 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d"} err="failed to get container status \"7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d\": rpc error: code = NotFound desc = could not find container \"7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d\": container with ID starting with 7315988978c1bbfea8fdeb323c12b9270dd7f600e231498763ce91e59e3e208d not found: ID does not exist" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.446990 4728 scope.go:117] "RemoveContainer" containerID="69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400" Feb 04 12:33:49 crc kubenswrapper[4728]: E0204 12:33:49.447629 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400\": container with ID starting with 69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400 not found: ID does not exist" containerID="69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.447671 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400"} err="failed to get container status \"69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400\": rpc error: code = NotFound desc = could not find container 
\"69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400\": container with ID starting with 69fc6c9db9ed9b002bab5ecf6ea03c069267b7a41fa95678b8541cea9716b400 not found: ID does not exist" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.447700 4728 scope.go:117] "RemoveContainer" containerID="effcef786bfc9f21220e8fff2dca7e70ab7237aec3735381f2357db1e3979c5d" Feb 04 12:33:49 crc kubenswrapper[4728]: E0204 12:33:49.447996 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"effcef786bfc9f21220e8fff2dca7e70ab7237aec3735381f2357db1e3979c5d\": container with ID starting with effcef786bfc9f21220e8fff2dca7e70ab7237aec3735381f2357db1e3979c5d not found: ID does not exist" containerID="effcef786bfc9f21220e8fff2dca7e70ab7237aec3735381f2357db1e3979c5d" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.448042 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effcef786bfc9f21220e8fff2dca7e70ab7237aec3735381f2357db1e3979c5d"} err="failed to get container status \"effcef786bfc9f21220e8fff2dca7e70ab7237aec3735381f2357db1e3979c5d\": rpc error: code = NotFound desc = could not find container \"effcef786bfc9f21220e8fff2dca7e70ab7237aec3735381f2357db1e3979c5d\": container with ID starting with effcef786bfc9f21220e8fff2dca7e70ab7237aec3735381f2357db1e3979c5d not found: ID does not exist" Feb 04 12:33:49 crc kubenswrapper[4728]: I0204 12:33:49.568077 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" path="/var/lib/kubelet/pods/3ea70e4e-a5cb-419f-bee0-790cfc30d04c/volumes" Feb 04 12:34:05 crc kubenswrapper[4728]: I0204 12:34:05.953279 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fwd6z"] Feb 04 12:34:05 crc kubenswrapper[4728]: E0204 12:34:05.955432 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerName="registry-server" Feb 04 12:34:05 crc kubenswrapper[4728]: I0204 12:34:05.955529 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerName="registry-server" Feb 04 12:34:05 crc kubenswrapper[4728]: E0204 12:34:05.955638 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerName="extract-content" Feb 04 12:34:05 crc kubenswrapper[4728]: I0204 12:34:05.955714 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerName="extract-content" Feb 04 12:34:05 crc kubenswrapper[4728]: E0204 12:34:05.955819 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerName="extract-utilities" Feb 04 12:34:05 crc kubenswrapper[4728]: I0204 12:34:05.955895 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerName="extract-utilities" Feb 04 12:34:05 crc kubenswrapper[4728]: I0204 12:34:05.956223 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea70e4e-a5cb-419f-bee0-790cfc30d04c" containerName="registry-server" Feb 04 12:34:05 crc kubenswrapper[4728]: I0204 12:34:05.958277 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwd6z" Feb 04 12:34:05 crc kubenswrapper[4728]: I0204 12:34:05.965029 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwd6z"] Feb 04 12:34:06 crc kubenswrapper[4728]: I0204 12:34:06.056356 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds2zp\" (UniqueName: \"kubernetes.io/projected/7df5c2d3-f927-4f1f-aefd-32e051d93007-kube-api-access-ds2zp\") pod \"redhat-marketplace-fwd6z\" (UID: \"7df5c2d3-f927-4f1f-aefd-32e051d93007\") " pod="openshift-marketplace/redhat-marketplace-fwd6z" Feb 04 12:34:06 crc kubenswrapper[4728]: I0204 12:34:06.056460 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-catalog-content\") pod \"redhat-marketplace-fwd6z\" (UID: \"7df5c2d3-f927-4f1f-aefd-32e051d93007\") " pod="openshift-marketplace/redhat-marketplace-fwd6z" Feb 04 12:34:06 crc kubenswrapper[4728]: I0204 12:34:06.056501 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-utilities\") pod \"redhat-marketplace-fwd6z\" (UID: \"7df5c2d3-f927-4f1f-aefd-32e051d93007\") " pod="openshift-marketplace/redhat-marketplace-fwd6z" Feb 04 12:34:06 crc kubenswrapper[4728]: I0204 12:34:06.158508 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds2zp\" (UniqueName: \"kubernetes.io/projected/7df5c2d3-f927-4f1f-aefd-32e051d93007-kube-api-access-ds2zp\") pod \"redhat-marketplace-fwd6z\" (UID: \"7df5c2d3-f927-4f1f-aefd-32e051d93007\") " pod="openshift-marketplace/redhat-marketplace-fwd6z" Feb 04 12:34:06 crc kubenswrapper[4728]: I0204 12:34:06.158622 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-catalog-content\") pod \"redhat-marketplace-fwd6z\" (UID: \"7df5c2d3-f927-4f1f-aefd-32e051d93007\") " pod="openshift-marketplace/redhat-marketplace-fwd6z" Feb 04 12:34:06 crc kubenswrapper[4728]: I0204 12:34:06.158656 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-utilities\") pod \"redhat-marketplace-fwd6z\" (UID: \"7df5c2d3-f927-4f1f-aefd-32e051d93007\") " pod="openshift-marketplace/redhat-marketplace-fwd6z" Feb 04 12:34:06 crc kubenswrapper[4728]: I0204 12:34:06.159054 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-catalog-content\") pod \"redhat-marketplace-fwd6z\" (UID: \"7df5c2d3-f927-4f1f-aefd-32e051d93007\") " pod="openshift-marketplace/redhat-marketplace-fwd6z" Feb 04 12:34:06 crc kubenswrapper[4728]: I0204 12:34:06.159168 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-utilities\") pod \"redhat-marketplace-fwd6z\" (UID: \"7df5c2d3-f927-4f1f-aefd-32e051d93007\") " pod="openshift-marketplace/redhat-marketplace-fwd6z" Feb 04 12:34:06 crc kubenswrapper[4728]: I0204 12:34:06.178476 4728 operation_generator.go:637] "MountVolume.SetUp 
Feb 04 12:34:06 crc kubenswrapper[4728]: I0204 12:34:06.283332 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwd6z"
Feb 04 12:34:06 crc kubenswrapper[4728]: I0204 12:34:06.792864 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwd6z"]
Feb 04 12:34:06 crc kubenswrapper[4728]: W0204 12:34:06.794245 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7df5c2d3_f927_4f1f_aefd_32e051d93007.slice/crio-98ccf709371bb5d9c9e1a32fbc6836a1d51b7dcdbb12da6bc3635da10d4a238a WatchSource:0}: Error finding container 98ccf709371bb5d9c9e1a32fbc6836a1d51b7dcdbb12da6bc3635da10d4a238a: Status 404 returned error can't find the container with id 98ccf709371bb5d9c9e1a32fbc6836a1d51b7dcdbb12da6bc3635da10d4a238a
Feb 04 12:34:07 crc kubenswrapper[4728]: I0204 12:34:07.525333 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwd6z" event={"ID":"7df5c2d3-f927-4f1f-aefd-32e051d93007","Type":"ContainerStarted","Data":"785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4"}
Feb 04 12:34:07 crc kubenswrapper[4728]: I0204 12:34:07.525773 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwd6z" event={"ID":"7df5c2d3-f927-4f1f-aefd-32e051d93007","Type":"ContainerStarted","Data":"98ccf709371bb5d9c9e1a32fbc6836a1d51b7dcdbb12da6bc3635da10d4a238a"}
Feb 04 12:34:08 crc kubenswrapper[4728]: I0204 12:34:08.537445 4728 generic.go:334] "Generic (PLEG): container finished" podID="7df5c2d3-f927-4f1f-aefd-32e051d93007" containerID="785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4" exitCode=0
Feb 04 12:34:08 crc kubenswrapper[4728]: I0204 12:34:08.537517 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwd6z" event={"ID":"7df5c2d3-f927-4f1f-aefd-32e051d93007","Type":"ContainerDied","Data":"785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4"}
Feb 04 12:34:11 crc kubenswrapper[4728]: I0204 12:34:11.570621 4728 generic.go:334] "Generic (PLEG): container finished" podID="7df5c2d3-f927-4f1f-aefd-32e051d93007" containerID="274066ebae27871e7f86a1a5f8efd62d5bb6540725a6154020f7ed8281d48c6c" exitCode=0
Feb 04 12:34:11 crc kubenswrapper[4728]: I0204 12:34:11.570711 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwd6z" event={"ID":"7df5c2d3-f927-4f1f-aefd-32e051d93007","Type":"ContainerDied","Data":"274066ebae27871e7f86a1a5f8efd62d5bb6540725a6154020f7ed8281d48c6c"}
Feb 04 12:34:13 crc kubenswrapper[4728]: I0204 12:34:13.589181 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwd6z" event={"ID":"7df5c2d3-f927-4f1f-aefd-32e051d93007","Type":"ContainerStarted","Data":"222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e"}
Feb 04 12:34:13 crc kubenswrapper[4728]: I0204 12:34:13.623030 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fwd6z" podStartSLOduration=4.5441933070000005 podStartE2EDuration="8.623004053s" podCreationTimestamp="2026-02-04 12:34:05 +0000 UTC" firstStartedPulling="2026-02-04 12:34:08.539827039 +0000 UTC m=+3997.682531424" lastFinishedPulling="2026-02-04 12:34:12.618637785 +0000 UTC m=+4001.761342170" observedRunningTime="2026-02-04 12:34:13.620277048 +0000 UTC m=+4002.762981443" watchObservedRunningTime="2026-02-04 12:34:13.623004053 +0000 UTC m=+4002.765708438"
Feb 04 12:34:16 crc kubenswrapper[4728]: I0204 12:34:16.284477 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fwd6z"
Feb 04 12:34:16 crc kubenswrapper[4728]: I0204 12:34:16.284837 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fwd6z"
Feb 04 12:34:16 crc kubenswrapper[4728]: I0204 12:34:16.337200 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fwd6z"
Feb 04 12:34:26 crc kubenswrapper[4728]: I0204 12:34:26.331387 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fwd6z"
Feb 04 12:34:26 crc kubenswrapper[4728]: I0204 12:34:26.379311 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwd6z"]
Feb 04 12:34:26 crc kubenswrapper[4728]: I0204 12:34:26.707890 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fwd6z" podUID="7df5c2d3-f927-4f1f-aefd-32e051d93007" containerName="registry-server" containerID="cri-o://222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e" gracePeriod=2
Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.219390 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwd6z"
Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.319160 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-catalog-content\") pod \"7df5c2d3-f927-4f1f-aefd-32e051d93007\" (UID: \"7df5c2d3-f927-4f1f-aefd-32e051d93007\") "
Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.319453 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds2zp\" (UniqueName: \"kubernetes.io/projected/7df5c2d3-f927-4f1f-aefd-32e051d93007-kube-api-access-ds2zp\") pod \"7df5c2d3-f927-4f1f-aefd-32e051d93007\" (UID: \"7df5c2d3-f927-4f1f-aefd-32e051d93007\") "
Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.319555 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-utilities\") pod \"7df5c2d3-f927-4f1f-aefd-32e051d93007\" (UID: \"7df5c2d3-f927-4f1f-aefd-32e051d93007\") "
Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.320717 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-utilities" (OuterVolumeSpecName: "utilities") pod "7df5c2d3-f927-4f1f-aefd-32e051d93007" (UID: "7df5c2d3-f927-4f1f-aefd-32e051d93007"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.326247 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df5c2d3-f927-4f1f-aefd-32e051d93007-kube-api-access-ds2zp" (OuterVolumeSpecName: "kube-api-access-ds2zp") pod "7df5c2d3-f927-4f1f-aefd-32e051d93007" (UID: "7df5c2d3-f927-4f1f-aefd-32e051d93007"). InnerVolumeSpecName "kube-api-access-ds2zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.348378 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7df5c2d3-f927-4f1f-aefd-32e051d93007" (UID: "7df5c2d3-f927-4f1f-aefd-32e051d93007"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.422171 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.422211 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds2zp\" (UniqueName: \"kubernetes.io/projected/7df5c2d3-f927-4f1f-aefd-32e051d93007-kube-api-access-ds2zp\") on node \"crc\" DevicePath \"\"" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.422225 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df5c2d3-f927-4f1f-aefd-32e051d93007-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.718599 4728 generic.go:334] "Generic (PLEG): container finished" podID="7df5c2d3-f927-4f1f-aefd-32e051d93007" containerID="222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e" exitCode=0 Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.718640 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwd6z" event={"ID":"7df5c2d3-f927-4f1f-aefd-32e051d93007","Type":"ContainerDied","Data":"222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e"} Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.718667 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwd6z" event={"ID":"7df5c2d3-f927-4f1f-aefd-32e051d93007","Type":"ContainerDied","Data":"98ccf709371bb5d9c9e1a32fbc6836a1d51b7dcdbb12da6bc3635da10d4a238a"} Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.718686 4728 scope.go:117] "RemoveContainer" containerID="222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.718680 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwd6z" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.748307 4728 scope.go:117] "RemoveContainer" containerID="274066ebae27871e7f86a1a5f8efd62d5bb6540725a6154020f7ed8281d48c6c" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.751321 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwd6z"] Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.760902 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwd6z"] Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.769263 4728 scope.go:117] "RemoveContainer" containerID="785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.827044 4728 scope.go:117] "RemoveContainer" containerID="222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e" Feb 04 12:34:27 crc kubenswrapper[4728]: E0204 12:34:27.827566 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e\": container with ID starting with 222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e not found: ID does not exist" containerID="222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.827610 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e"} err="failed to get container status \"222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e\": rpc error: code = NotFound desc = could not find container \"222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e\": container with ID starting with 222199b2adc6dc2e3c781d0bce6ca9ab818fa4f725ae81c7d74a525c19ec0f4e not found: ID does not exist" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.827636 4728 scope.go:117] "RemoveContainer" containerID="274066ebae27871e7f86a1a5f8efd62d5bb6540725a6154020f7ed8281d48c6c" Feb 04 12:34:27 crc kubenswrapper[4728]: E0204 12:34:27.828069 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"274066ebae27871e7f86a1a5f8efd62d5bb6540725a6154020f7ed8281d48c6c\": container with ID starting with 274066ebae27871e7f86a1a5f8efd62d5bb6540725a6154020f7ed8281d48c6c not found: ID does not exist" containerID="274066ebae27871e7f86a1a5f8efd62d5bb6540725a6154020f7ed8281d48c6c" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.828131 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"274066ebae27871e7f86a1a5f8efd62d5bb6540725a6154020f7ed8281d48c6c"} err="failed to get container status \"274066ebae27871e7f86a1a5f8efd62d5bb6540725a6154020f7ed8281d48c6c\": rpc error: code = NotFound desc = could not find container \"274066ebae27871e7f86a1a5f8efd62d5bb6540725a6154020f7ed8281d48c6c\": container with ID starting with 274066ebae27871e7f86a1a5f8efd62d5bb6540725a6154020f7ed8281d48c6c not found: ID does not exist" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.828167 4728 scope.go:117] "RemoveContainer" containerID="785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4" Feb 04 12:34:27 crc kubenswrapper[4728]: E0204 12:34:27.828626 4728 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4\": container with ID starting with 785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4 not found: ID does not exist" containerID="785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4" Feb 04 12:34:27 crc kubenswrapper[4728]: I0204 12:34:27.828677 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4"} err="failed to get container status \"785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4\": rpc error: code = NotFound desc = could not find container \"785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4\": container with ID starting with 785a628780913ad3bef510e12ffe349681394674cf9fbefe95b0eceb84ef71f4 not found: ID does not exist" Feb 04 12:34:29 crc kubenswrapper[4728]: I0204 12:34:29.566647 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df5c2d3-f927-4f1f-aefd-32e051d93007" path="/var/lib/kubelet/pods/7df5c2d3-f927-4f1f-aefd-32e051d93007/volumes" Feb 04 12:34:35 crc kubenswrapper[4728]: I0204 12:34:35.448606 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:34:35 crc kubenswrapper[4728]: I0204 12:34:35.449152 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:35:05 crc kubenswrapper[4728]: I0204 12:35:05.447933 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:35:05 crc kubenswrapper[4728]: I0204 12:35:05.448549 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:35:08 crc kubenswrapper[4728]: I0204 12:35:08.512421 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6dcb54f59-lnlx2_3b49d7d8-7c63-482c-b882-25c01e798afe/manager/0.log" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.646724 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kknjn/must-gather-xgvtw"] Feb 04 12:35:33 crc kubenswrapper[4728]: E0204 12:35:33.647658 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df5c2d3-f927-4f1f-aefd-32e051d93007" containerName="extract-utilities" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.647672 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df5c2d3-f927-4f1f-aefd-32e051d93007" containerName="extract-utilities" Feb 04 12:35:33 crc kubenswrapper[4728]: E0204 
12:35:33.647687 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df5c2d3-f927-4f1f-aefd-32e051d93007" containerName="extract-content" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.647693 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df5c2d3-f927-4f1f-aefd-32e051d93007" containerName="extract-content" Feb 04 12:35:33 crc kubenswrapper[4728]: E0204 12:35:33.647719 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df5c2d3-f927-4f1f-aefd-32e051d93007" containerName="registry-server" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.647727 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df5c2d3-f927-4f1f-aefd-32e051d93007" containerName="registry-server" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.647948 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df5c2d3-f927-4f1f-aefd-32e051d93007" containerName="registry-server" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.650519 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kknjn/must-gather-xgvtw" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.653357 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kknjn"/"kube-root-ca.crt" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.653396 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kknjn"/"openshift-service-ca.crt" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.653827 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kknjn"/"default-dockercfg-58nv8" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.666630 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kknjn/must-gather-xgvtw"] Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.741633 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bdb7756f-ba5a-4b21-b273-33044aa95835-must-gather-output\") pod \"must-gather-xgvtw\" (UID: \"bdb7756f-ba5a-4b21-b273-33044aa95835\") " pod="openshift-must-gather-kknjn/must-gather-xgvtw" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.741741 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7wtr\" (UniqueName: \"kubernetes.io/projected/bdb7756f-ba5a-4b21-b273-33044aa95835-kube-api-access-j7wtr\") pod \"must-gather-xgvtw\" (UID: \"bdb7756f-ba5a-4b21-b273-33044aa95835\") " pod="openshift-must-gather-kknjn/must-gather-xgvtw" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.843772 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7wtr\" (UniqueName: \"kubernetes.io/projected/bdb7756f-ba5a-4b21-b273-33044aa95835-kube-api-access-j7wtr\") pod \"must-gather-xgvtw\" (UID: \"bdb7756f-ba5a-4b21-b273-33044aa95835\") " pod="openshift-must-gather-kknjn/must-gather-xgvtw" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.844231 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bdb7756f-ba5a-4b21-b273-33044aa95835-must-gather-output\") pod \"must-gather-xgvtw\" (UID: \"bdb7756f-ba5a-4b21-b273-33044aa95835\") " pod="openshift-must-gather-kknjn/must-gather-xgvtw" Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.844748 
Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.952436 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7wtr\" (UniqueName: \"kubernetes.io/projected/bdb7756f-ba5a-4b21-b273-33044aa95835-kube-api-access-j7wtr\") pod \"must-gather-xgvtw\" (UID: \"bdb7756f-ba5a-4b21-b273-33044aa95835\") " pod="openshift-must-gather-kknjn/must-gather-xgvtw"
Feb 04 12:35:33 crc kubenswrapper[4728]: I0204 12:35:33.977679 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kknjn/must-gather-xgvtw"
Feb 04 12:35:34 crc kubenswrapper[4728]: I0204 12:35:34.451239 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kknjn/must-gather-xgvtw"]
Feb 04 12:35:34 crc kubenswrapper[4728]: I0204 12:35:34.457639 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 04 12:35:35 crc kubenswrapper[4728]: I0204 12:35:35.330544 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kknjn/must-gather-xgvtw" event={"ID":"bdb7756f-ba5a-4b21-b273-33044aa95835","Type":"ContainerStarted","Data":"8f42baf64c82fff2a63ae45e232e8458128cc74041af803fac2ecc99c774db01"}
Feb 04 12:35:35 crc kubenswrapper[4728]: I0204 12:35:35.448794 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 12:35:35 crc kubenswrapper[4728]: I0204 12:35:35.448853 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 04 12:35:35 crc kubenswrapper[4728]: I0204 12:35:35.448912 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grzvj"
Feb 04 12:35:35 crc kubenswrapper[4728]: I0204 12:35:35.449719 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3827ef241d57a2fe47b067be7ffbf7b5cdcdfe8ffcb18ff63fee0c7bc2e54cd"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 04 12:35:35 crc kubenswrapper[4728]: I0204 12:35:35.449788 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://d3827ef241d57a2fe47b067be7ffbf7b5cdcdfe8ffcb18ff63fee0c7bc2e54cd" gracePeriod=600
Feb 04 12:35:36 crc kubenswrapper[4728]: I0204 12:35:36.342417 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="d3827ef241d57a2fe47b067be7ffbf7b5cdcdfe8ffcb18ff63fee0c7bc2e54cd" exitCode=0
Feb 04 12:35:36 crc kubenswrapper[4728]: I0204 12:35:36.342506 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"d3827ef241d57a2fe47b067be7ffbf7b5cdcdfe8ffcb18ff63fee0c7bc2e54cd"}
Feb 04 12:35:36 crc kubenswrapper[4728]: I0204 12:35:36.342762 4728 scope.go:117] "RemoveContainer" containerID="bc6b14c1126e86d554eb9962762376030db0073a11ded5b51142682160fc46f9"
Feb 04 12:35:37 crc kubenswrapper[4728]: I0204 12:35:37.373707 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e"}
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.295870 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gcktt"]
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.298228 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.309221 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gcktt"]
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.434126 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-utilities\") pod \"community-operators-gcktt\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.434476 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4x4d\" (UniqueName: \"kubernetes.io/projected/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-kube-api-access-n4x4d\") pod \"community-operators-gcktt\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.434578 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-catalog-content\") pod \"community-operators-gcktt\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.537066 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-utilities\") pod \"community-operators-gcktt\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.538016 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-utilities\") pod \"community-operators-gcktt\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.538033 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4x4d\" (UniqueName: \"kubernetes.io/projected/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-kube-api-access-n4x4d\") pod \"community-operators-gcktt\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.538627 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-catalog-content\") pod \"community-operators-gcktt\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.539126 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-catalog-content\") pod \"community-operators-gcktt\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.573835 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4x4d\" (UniqueName: \"kubernetes.io/projected/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-kube-api-access-n4x4d\") pod \"community-operators-gcktt\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:38 crc kubenswrapper[4728]: I0204 12:35:38.627506 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:39 crc kubenswrapper[4728]: I0204 12:35:39.307100 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gcktt"]
Feb 04 12:35:39 crc kubenswrapper[4728]: I0204 12:35:39.398775 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcktt" event={"ID":"2d617ec7-fd9c-415f-9044-4f9c9e395e7e","Type":"ContainerStarted","Data":"60376f7e9697c48de48c0bc43eca63b9653e45f9605fd65e79ddab444b4e05e6"}
Feb 04 12:35:40 crc kubenswrapper[4728]: I0204 12:35:40.409189 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kknjn/must-gather-xgvtw" event={"ID":"bdb7756f-ba5a-4b21-b273-33044aa95835","Type":"ContainerStarted","Data":"2b3e2bbc5df57c9e1180017ef981495d5d09d6e7610db1314fc97b4e50432499"}
Feb 04 12:35:40 crc kubenswrapper[4728]: I0204 12:35:40.410964 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kknjn/must-gather-xgvtw" event={"ID":"bdb7756f-ba5a-4b21-b273-33044aa95835","Type":"ContainerStarted","Data":"8d3734f3d9bffd6e6d55e27a08a05213b43eef5033e8d5daaeb23976aac2e57d"}
Feb 04 12:35:40 crc kubenswrapper[4728]: I0204 12:35:40.411799 4728 generic.go:334] "Generic (PLEG): container finished" podID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerID="400c28686e65cad93566e0ce2638359be146f4bad286e76f9d7418efbcca6b6e" exitCode=0
Feb 04 12:35:40 crc kubenswrapper[4728]: I0204 12:35:40.411837 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcktt" event={"ID":"2d617ec7-fd9c-415f-9044-4f9c9e395e7e","Type":"ContainerDied","Data":"400c28686e65cad93566e0ce2638359be146f4bad286e76f9d7418efbcca6b6e"}
Feb 04 12:35:40 crc kubenswrapper[4728]: I0204 12:35:40.436054 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kknjn/must-gather-xgvtw" podStartSLOduration=2.4636774040000002 podStartE2EDuration="7.436028736s" podCreationTimestamp="2026-02-04 12:35:33 +0000 UTC" firstStartedPulling="2026-02-04 12:35:34.457429969 +0000 UTC m=+4083.600134354" lastFinishedPulling="2026-02-04 12:35:39.429781301 +0000 UTC m=+4088.572485686" observedRunningTime="2026-02-04 12:35:40.428510082 +0000 UTC m=+4089.571214517" watchObservedRunningTime="2026-02-04 12:35:40.436028736 +0000 UTC m=+4089.578733131"
Feb 04 12:35:42 crc kubenswrapper[4728]: I0204 12:35:42.431528 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcktt" event={"ID":"2d617ec7-fd9c-415f-9044-4f9c9e395e7e","Type":"ContainerStarted","Data":"25ff6e0dd941cf4003c218ee9dc3a98358de9d6a6054a12004c31dea0ac21130"}
Feb 04 12:35:43 crc kubenswrapper[4728]: I0204 12:35:43.442049 4728 generic.go:334] "Generic (PLEG): container finished" podID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerID="25ff6e0dd941cf4003c218ee9dc3a98358de9d6a6054a12004c31dea0ac21130" exitCode=0
Feb 04 12:35:43 crc kubenswrapper[4728]: I0204 12:35:43.442118 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcktt" event={"ID":"2d617ec7-fd9c-415f-9044-4f9c9e395e7e","Type":"ContainerDied","Data":"25ff6e0dd941cf4003c218ee9dc3a98358de9d6a6054a12004c31dea0ac21130"}
Feb 04 12:35:48 crc kubenswrapper[4728]: I0204 12:35:48.552669 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcktt" event={"ID":"2d617ec7-fd9c-415f-9044-4f9c9e395e7e","Type":"ContainerStarted","Data":"b38e75be9031cd47afbd363f99da042195edd496cfc19d44172ccc4c44964cf8"}
Feb 04 12:35:48 crc kubenswrapper[4728]: I0204 12:35:48.587831 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gcktt" podStartSLOduration=3.628165251 podStartE2EDuration="10.587814288s" podCreationTimestamp="2026-02-04 12:35:38 +0000 UTC" firstStartedPulling="2026-02-04 12:35:40.413400302 +0000 UTC m=+4089.556104687" lastFinishedPulling="2026-02-04 12:35:47.373049339 +0000 UTC m=+4096.515753724" observedRunningTime="2026-02-04 12:35:48.583185904 +0000 UTC m=+4097.725890309" watchObservedRunningTime="2026-02-04 12:35:48.587814288 +0000 UTC m=+4097.730518673"
Feb 04 12:35:48 crc kubenswrapper[4728]: I0204 12:35:48.628683 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:48 crc kubenswrapper[4728]: I0204 12:35:48.628743 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gcktt"
Feb 04 12:35:49 crc kubenswrapper[4728]: I0204 12:35:49.679601 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gcktt" podUID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerName="registry-server" probeResult="failure" output=<
Feb 04 12:35:49 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s
Feb 04 12:35:49 crc kubenswrapper[4728]: >
Feb 04 12:35:51 crc kubenswrapper[4728]: I0204 12:35:51.965959 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kknjn/crc-debug-vx6tq"]
Feb 04 12:35:51 crc kubenswrapper[4728]: I0204 12:35:51.968398 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kknjn/crc-debug-vx6tq"
Need to start a new one" pod="openshift-must-gather-kknjn/crc-debug-vx6tq" Feb 04 12:35:52 crc kubenswrapper[4728]: I0204 12:35:52.057407 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8b7w\" (UniqueName: \"kubernetes.io/projected/53fc204f-95c1-4504-9ae9-907cfb23b93e-kube-api-access-n8b7w\") pod \"crc-debug-vx6tq\" (UID: \"53fc204f-95c1-4504-9ae9-907cfb23b93e\") " pod="openshift-must-gather-kknjn/crc-debug-vx6tq" Feb 04 12:35:52 crc kubenswrapper[4728]: I0204 12:35:52.057504 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/53fc204f-95c1-4504-9ae9-907cfb23b93e-host\") pod \"crc-debug-vx6tq\" (UID: \"53fc204f-95c1-4504-9ae9-907cfb23b93e\") " pod="openshift-must-gather-kknjn/crc-debug-vx6tq" Feb 04 12:35:52 crc kubenswrapper[4728]: I0204 12:35:52.159882 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8b7w\" (UniqueName: \"kubernetes.io/projected/53fc204f-95c1-4504-9ae9-907cfb23b93e-kube-api-access-n8b7w\") pod \"crc-debug-vx6tq\" (UID: \"53fc204f-95c1-4504-9ae9-907cfb23b93e\") " pod="openshift-must-gather-kknjn/crc-debug-vx6tq" Feb 04 12:35:52 crc kubenswrapper[4728]: I0204 12:35:52.159964 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/53fc204f-95c1-4504-9ae9-907cfb23b93e-host\") pod \"crc-debug-vx6tq\" (UID: \"53fc204f-95c1-4504-9ae9-907cfb23b93e\") " pod="openshift-must-gather-kknjn/crc-debug-vx6tq" Feb 04 12:35:52 crc kubenswrapper[4728]: I0204 12:35:52.160073 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/53fc204f-95c1-4504-9ae9-907cfb23b93e-host\") pod \"crc-debug-vx6tq\" (UID: \"53fc204f-95c1-4504-9ae9-907cfb23b93e\") " pod="openshift-must-gather-kknjn/crc-debug-vx6tq" Feb 04 12:35:52 crc kubenswrapper[4728]: I0204 12:35:52.189063 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8b7w\" (UniqueName: \"kubernetes.io/projected/53fc204f-95c1-4504-9ae9-907cfb23b93e-kube-api-access-n8b7w\") pod \"crc-debug-vx6tq\" (UID: \"53fc204f-95c1-4504-9ae9-907cfb23b93e\") " pod="openshift-must-gather-kknjn/crc-debug-vx6tq" Feb 04 12:35:52 crc kubenswrapper[4728]: I0204 12:35:52.287077 4728 util.go:30] "No sandbox for pod can be found. 
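
The "Observed pod startup duration" entry for must-gather-xgvtw above carries two durations: podStartE2EDuration is wall-clock time from podCreationTimestamp to the observed running time, while podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Go sketch reproducing the arithmetic from that entry's own timestamps (the layout string is an assumption about this log's timestamp format):

    package main

    import (
        "fmt"
        "time"
    )

    // Assumed layout for the timestamps quoted in the entry above.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-02-04 12:35:33 +0000 UTC")             // podCreationTimestamp
        firstPull := mustParse("2026-02-04 12:35:34.457429969 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2026-02-04 12:35:39.429781301 +0000 UTC")  // lastFinishedPulling
        running := mustParse("2026-02-04 12:35:40.436028736 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)      // podStartE2EDuration
        pull := lastPull.Sub(firstPull)  // time spent pulling images
        fmt.Println(e2e, pull, e2e-pull) // 7.436028736s 4.972351332s 2.463677404s
    }

7.436028736s minus 4.972351332s gives 2.463677404s, matching the logged podStartSLOduration.
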
Need to start a new one" pod="openshift-must-gather-kknjn/crc-debug-vx6tq" Feb 04 12:35:52 crc kubenswrapper[4728]: W0204 12:35:52.329796 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53fc204f_95c1_4504_9ae9_907cfb23b93e.slice/crio-bb8b1610ab3b0babec011e97aa2bef481dc1773bb7f3a7f790fe2d85ab1f928e WatchSource:0}: Error finding container bb8b1610ab3b0babec011e97aa2bef481dc1773bb7f3a7f790fe2d85ab1f928e: Status 404 returned error can't find the container with id bb8b1610ab3b0babec011e97aa2bef481dc1773bb7f3a7f790fe2d85ab1f928e Feb 04 12:35:52 crc kubenswrapper[4728]: I0204 12:35:52.588848 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kknjn/crc-debug-vx6tq" event={"ID":"53fc204f-95c1-4504-9ae9-907cfb23b93e","Type":"ContainerStarted","Data":"bb8b1610ab3b0babec011e97aa2bef481dc1773bb7f3a7f790fe2d85ab1f928e"} Feb 04 12:35:58 crc kubenswrapper[4728]: I0204 12:35:58.694562 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gcktt" Feb 04 12:35:58 crc kubenswrapper[4728]: I0204 12:35:58.763511 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gcktt" Feb 04 12:35:58 crc kubenswrapper[4728]: I0204 12:35:58.948863 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gcktt"] Feb 04 12:36:00 crc kubenswrapper[4728]: I0204 12:36:00.681363 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gcktt" podUID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerName="registry-server" containerID="cri-o://b38e75be9031cd47afbd363f99da042195edd496cfc19d44172ccc4c44964cf8" gracePeriod=2 Feb 04 12:36:02 crc kubenswrapper[4728]: I0204 12:36:02.702585 4728 generic.go:334] "Generic (PLEG): container finished" podID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerID="b38e75be9031cd47afbd363f99da042195edd496cfc19d44172ccc4c44964cf8" exitCode=0 Feb 04 12:36:02 crc kubenswrapper[4728]: I0204 12:36:02.703324 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcktt" event={"ID":"2d617ec7-fd9c-415f-9044-4f9c9e395e7e","Type":"ContainerDied","Data":"b38e75be9031cd47afbd363f99da042195edd496cfc19d44172ccc4c44964cf8"} Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.485621 4728 util.go:48] "No ready sandbox for pod can be found. 
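
The kuberuntime_container.go:808 entry above records the kubelet stopping registry-server with gracePeriod=2; the matching ContainerDied event follows two seconds later with exitCode=0. A sketch of the TERM-then-KILL contract that grace period expresses, exercised against a throwaway child process (an illustration of the semantics under POSIX signals, not kubelet or CRI-O source):

    package main

    import (
        "log"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace sends SIGTERM, waits up to the grace period for the
    // process to exit on its own, then escalates to SIGKILL.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited in time, like registry-server above
        case <-time.After(grace):
            return cmd.Process.Kill() // grace elapsed; force SIGKILL
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        // gracePeriod=2 mirrors the registry-server shutdown above.
        if err := stopWithGrace(cmd, 2*time.Second); err != nil {
            log.Println("stop:", err)
        }
    }
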
Need to start a new one" pod="openshift-marketplace/community-operators-gcktt" Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.590557 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4x4d\" (UniqueName: \"kubernetes.io/projected/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-kube-api-access-n4x4d\") pod \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.590955 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-catalog-content\") pod \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.591092 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-utilities\") pod \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\" (UID: \"2d617ec7-fd9c-415f-9044-4f9c9e395e7e\") " Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.594448 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-utilities" (OuterVolumeSpecName: "utilities") pod "2d617ec7-fd9c-415f-9044-4f9c9e395e7e" (UID: "2d617ec7-fd9c-415f-9044-4f9c9e395e7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.600475 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-kube-api-access-n4x4d" (OuterVolumeSpecName: "kube-api-access-n4x4d") pod "2d617ec7-fd9c-415f-9044-4f9c9e395e7e" (UID: "2d617ec7-fd9c-415f-9044-4f9c9e395e7e"). InnerVolumeSpecName "kube-api-access-n4x4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.644963 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d617ec7-fd9c-415f-9044-4f9c9e395e7e" (UID: "2d617ec7-fd9c-415f-9044-4f9c9e395e7e"). InnerVolumeSpecName "catalog-content". 
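
The reconciler entries above (UnmountVolume started, TearDown succeeded) identify each volume by name, UniqueName, and owning pod UID, but in the journal the structured-log quotes arrive backslash-escaped. A sketch that pulls those fields back out of raw lines piped on stdin; the regular expression is ours and assumes the escaped-quote form shown above:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches the escaped quoting kubelet's structured logging produces in
    // the journal, e.g.:
    //   operationExecutor.UnmountVolume started for volume \"catalog-content\" ... (UID: \"2d61...\")
    var volEvent = regexp.MustCompile(
        `operationExecutor\.(\w+) started for volume \\"([^\\]+)\\".*\(UID: \\"([0-9a-f-]+)\\"\)`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := volEvent.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("op=%s volume=%s uid=%s\n", m[1], m[2], m[3])
            }
        }
    }

Fed the unmount entries above, it would report kube-api-access-n4x4d, catalog-content, and utilities, all owned by pod UID 2d617ec7-fd9c-415f-9044-4f9c9e395e7e.
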
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.693322 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4x4d\" (UniqueName: \"kubernetes.io/projected/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-kube-api-access-n4x4d\") on node \"crc\" DevicePath \"\"" Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.693628 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.693645 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d617ec7-fd9c-415f-9044-4f9c9e395e7e-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.724508 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcktt" event={"ID":"2d617ec7-fd9c-415f-9044-4f9c9e395e7e","Type":"ContainerDied","Data":"60376f7e9697c48de48c0bc43eca63b9653e45f9605fd65e79ddab444b4e05e6"} Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.724580 4728 scope.go:117] "RemoveContainer" containerID="b38e75be9031cd47afbd363f99da042195edd496cfc19d44172ccc4c44964cf8" Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.724732 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcktt" Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.735959 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kknjn/crc-debug-vx6tq" event={"ID":"53fc204f-95c1-4504-9ae9-907cfb23b93e","Type":"ContainerStarted","Data":"86041b071c0f4470dc4fe6668dddd4ce3c9d97a84af61789677a4944574eccb0"} Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.759561 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kknjn/crc-debug-vx6tq" podStartSLOduration=1.931424936 podStartE2EDuration="12.75954405s" podCreationTimestamp="2026-02-04 12:35:51 +0000 UTC" firstStartedPulling="2026-02-04 12:35:52.332522849 +0000 UTC m=+4101.475227234" lastFinishedPulling="2026-02-04 12:36:03.160641963 +0000 UTC m=+4112.303346348" observedRunningTime="2026-02-04 12:36:03.751899602 +0000 UTC m=+4112.894603987" watchObservedRunningTime="2026-02-04 12:36:03.75954405 +0000 UTC m=+4112.902248455" Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.768209 4728 scope.go:117] "RemoveContainer" containerID="25ff6e0dd941cf4003c218ee9dc3a98358de9d6a6054a12004c31dea0ac21130" Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.787878 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gcktt"] Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.796952 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gcktt"] Feb 04 12:36:03 crc kubenswrapper[4728]: I0204 12:36:03.801099 4728 scope.go:117] "RemoveContainer" containerID="400c28686e65cad93566e0ce2638359be146f4bad286e76f9d7418efbcca6b6e" Feb 04 12:36:05 crc kubenswrapper[4728]: I0204 12:36:05.567193 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" path="/var/lib/kubelet/pods/2d617ec7-fd9c-415f-9044-4f9c9e395e7e/volumes" Feb 04 12:36:32 crc kubenswrapper[4728]: I0204 12:36:32.002460 4728 generic.go:334] 
"Generic (PLEG): container finished" podID="53fc204f-95c1-4504-9ae9-907cfb23b93e" containerID="86041b071c0f4470dc4fe6668dddd4ce3c9d97a84af61789677a4944574eccb0" exitCode=0 Feb 04 12:36:32 crc kubenswrapper[4728]: I0204 12:36:32.002549 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kknjn/crc-debug-vx6tq" event={"ID":"53fc204f-95c1-4504-9ae9-907cfb23b93e","Type":"ContainerDied","Data":"86041b071c0f4470dc4fe6668dddd4ce3c9d97a84af61789677a4944574eccb0"} Feb 04 12:36:33 crc kubenswrapper[4728]: I0204 12:36:33.155973 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kknjn/crc-debug-vx6tq" Feb 04 12:36:33 crc kubenswrapper[4728]: I0204 12:36:33.194915 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kknjn/crc-debug-vx6tq"] Feb 04 12:36:33 crc kubenswrapper[4728]: I0204 12:36:33.203770 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kknjn/crc-debug-vx6tq"] Feb 04 12:36:33 crc kubenswrapper[4728]: I0204 12:36:33.280179 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/53fc204f-95c1-4504-9ae9-907cfb23b93e-host\") pod \"53fc204f-95c1-4504-9ae9-907cfb23b93e\" (UID: \"53fc204f-95c1-4504-9ae9-907cfb23b93e\") " Feb 04 12:36:33 crc kubenswrapper[4728]: I0204 12:36:33.280292 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53fc204f-95c1-4504-9ae9-907cfb23b93e-host" (OuterVolumeSpecName: "host") pod "53fc204f-95c1-4504-9ae9-907cfb23b93e" (UID: "53fc204f-95c1-4504-9ae9-907cfb23b93e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 12:36:33 crc kubenswrapper[4728]: I0204 12:36:33.280343 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8b7w\" (UniqueName: \"kubernetes.io/projected/53fc204f-95c1-4504-9ae9-907cfb23b93e-kube-api-access-n8b7w\") pod \"53fc204f-95c1-4504-9ae9-907cfb23b93e\" (UID: \"53fc204f-95c1-4504-9ae9-907cfb23b93e\") " Feb 04 12:36:33 crc kubenswrapper[4728]: I0204 12:36:33.280986 4728 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/53fc204f-95c1-4504-9ae9-907cfb23b93e-host\") on node \"crc\" DevicePath \"\"" Feb 04 12:36:33 crc kubenswrapper[4728]: I0204 12:36:33.290984 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53fc204f-95c1-4504-9ae9-907cfb23b93e-kube-api-access-n8b7w" (OuterVolumeSpecName: "kube-api-access-n8b7w") pod "53fc204f-95c1-4504-9ae9-907cfb23b93e" (UID: "53fc204f-95c1-4504-9ae9-907cfb23b93e"). InnerVolumeSpecName "kube-api-access-n8b7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:36:33 crc kubenswrapper[4728]: I0204 12:36:33.382697 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8b7w\" (UniqueName: \"kubernetes.io/projected/53fc204f-95c1-4504-9ae9-907cfb23b93e-kube-api-access-n8b7w\") on node \"crc\" DevicePath \"\"" Feb 04 12:36:33 crc kubenswrapper[4728]: I0204 12:36:33.564696 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53fc204f-95c1-4504-9ae9-907cfb23b93e" path="/var/lib/kubelet/pods/53fc204f-95c1-4504-9ae9-907cfb23b93e/volumes" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.022929 4728 scope.go:117] "RemoveContainer" containerID="86041b071c0f4470dc4fe6668dddd4ce3c9d97a84af61789677a4944574eccb0" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.023182 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kknjn/crc-debug-vx6tq" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.428633 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kknjn/crc-debug-t7n9k"] Feb 04 12:36:34 crc kubenswrapper[4728]: E0204 12:36:34.429284 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fc204f-95c1-4504-9ae9-907cfb23b93e" containerName="container-00" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.429295 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fc204f-95c1-4504-9ae9-907cfb23b93e" containerName="container-00" Feb 04 12:36:34 crc kubenswrapper[4728]: E0204 12:36:34.429312 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerName="registry-server" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.429318 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerName="registry-server" Feb 04 12:36:34 crc kubenswrapper[4728]: E0204 12:36:34.429336 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerName="extract-content" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.429343 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerName="extract-content" Feb 04 12:36:34 crc kubenswrapper[4728]: E0204 12:36:34.429358 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerName="extract-utilities" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.429364 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerName="extract-utilities" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.429565 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="53fc204f-95c1-4504-9ae9-907cfb23b93e" containerName="container-00" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.429579 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d617ec7-fd9c-415f-9044-4f9c9e395e7e" containerName="registry-server" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.430271 4728 util.go:30] "No sandbox for pod can be found. 
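
The cpu_manager.go:410 / state_mem.go:107 / memory_manager.go:354 entries above show the admission of crc-debug-t7n9k triggering cleanup of per-container resource assignments left behind by the two pods just removed. A hypothetical sketch of that bookkeeping pattern, keyed the same way the log keys it (podUID plus containerName); the types and function are ours, not kubelet's:

    package main

    import "fmt"

    // key identifies per-container resource state, mirroring the
    // podUID/containerName pairs in the RemoveStaleState entries above.
    type key struct{ podUID, container string }

    // removeStaleState drops assignments whose pod no longer exists.
    func removeStaleState(assignments map[key]string, livePods map[string]bool) {
        for k := range assignments {
            if !livePods[k.podUID] {
                fmt.Printf("removing stale state podUID=%q container=%q\n", k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"53fc204f-95c1-4504-9ae9-907cfb23b93e", "container-00"}:    "cpus 0-1",
            {"2d617ec7-fd9c-415f-9044-4f9c9e395e7e", "registry-server"}: "cpus 2-3",
        }
        // Neither pod is in the live set any more, so both entries go.
        removeStaleState(assignments, map[string]bool{})
        fmt.Println(len(assignments)) // 0
    }
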
Need to start a new one" pod="openshift-must-gather-kknjn/crc-debug-t7n9k" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.604644 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29afef70-d824-4c2b-b5fd-2b344a8326b1-host\") pod \"crc-debug-t7n9k\" (UID: \"29afef70-d824-4c2b-b5fd-2b344a8326b1\") " pod="openshift-must-gather-kknjn/crc-debug-t7n9k" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.605068 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcv5w\" (UniqueName: \"kubernetes.io/projected/29afef70-d824-4c2b-b5fd-2b344a8326b1-kube-api-access-hcv5w\") pod \"crc-debug-t7n9k\" (UID: \"29afef70-d824-4c2b-b5fd-2b344a8326b1\") " pod="openshift-must-gather-kknjn/crc-debug-t7n9k" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.706863 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29afef70-d824-4c2b-b5fd-2b344a8326b1-host\") pod \"crc-debug-t7n9k\" (UID: \"29afef70-d824-4c2b-b5fd-2b344a8326b1\") " pod="openshift-must-gather-kknjn/crc-debug-t7n9k" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.707036 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcv5w\" (UniqueName: \"kubernetes.io/projected/29afef70-d824-4c2b-b5fd-2b344a8326b1-kube-api-access-hcv5w\") pod \"crc-debug-t7n9k\" (UID: \"29afef70-d824-4c2b-b5fd-2b344a8326b1\") " pod="openshift-must-gather-kknjn/crc-debug-t7n9k" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.707049 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29afef70-d824-4c2b-b5fd-2b344a8326b1-host\") pod \"crc-debug-t7n9k\" (UID: \"29afef70-d824-4c2b-b5fd-2b344a8326b1\") " pod="openshift-must-gather-kknjn/crc-debug-t7n9k" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.727298 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcv5w\" (UniqueName: \"kubernetes.io/projected/29afef70-d824-4c2b-b5fd-2b344a8326b1-kube-api-access-hcv5w\") pod \"crc-debug-t7n9k\" (UID: \"29afef70-d824-4c2b-b5fd-2b344a8326b1\") " pod="openshift-must-gather-kknjn/crc-debug-t7n9k" Feb 04 12:36:34 crc kubenswrapper[4728]: I0204 12:36:34.746631 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kknjn/crc-debug-t7n9k" Feb 04 12:36:34 crc kubenswrapper[4728]: W0204 12:36:34.790244 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29afef70_d824_4c2b_b5fd_2b344a8326b1.slice/crio-e71ea6025c8ff83620484818911e35493daab3276cc827c1e9589bbc221d76f2 WatchSource:0}: Error finding container e71ea6025c8ff83620484818911e35493daab3276cc827c1e9589bbc221d76f2: Status 404 returned error can't find the container with id e71ea6025c8ff83620484818911e35493daab3276cc827c1e9589bbc221d76f2 Feb 04 12:36:35 crc kubenswrapper[4728]: I0204 12:36:35.035007 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kknjn/crc-debug-t7n9k" event={"ID":"29afef70-d824-4c2b-b5fd-2b344a8326b1","Type":"ContainerStarted","Data":"e71ea6025c8ff83620484818911e35493daab3276cc827c1e9589bbc221d76f2"} Feb 04 12:36:36 crc kubenswrapper[4728]: I0204 12:36:36.045645 4728 generic.go:334] "Generic (PLEG): container finished" podID="29afef70-d824-4c2b-b5fd-2b344a8326b1" containerID="ab8aabbf5125e85d36d73e1a9858c2de488257e05640194f7512d3ac153988ee" exitCode=1 Feb 04 12:36:36 crc kubenswrapper[4728]: I0204 12:36:36.045875 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kknjn/crc-debug-t7n9k" event={"ID":"29afef70-d824-4c2b-b5fd-2b344a8326b1","Type":"ContainerDied","Data":"ab8aabbf5125e85d36d73e1a9858c2de488257e05640194f7512d3ac153988ee"} Feb 04 12:36:36 crc kubenswrapper[4728]: I0204 12:36:36.108702 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kknjn/crc-debug-t7n9k"] Feb 04 12:36:36 crc kubenswrapper[4728]: I0204 12:36:36.130964 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kknjn/crc-debug-t7n9k"] Feb 04 12:36:37 crc kubenswrapper[4728]: I0204 12:36:37.592650 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kknjn/crc-debug-t7n9k" Feb 04 12:36:37 crc kubenswrapper[4728]: I0204 12:36:37.767102 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcv5w\" (UniqueName: \"kubernetes.io/projected/29afef70-d824-4c2b-b5fd-2b344a8326b1-kube-api-access-hcv5w\") pod \"29afef70-d824-4c2b-b5fd-2b344a8326b1\" (UID: \"29afef70-d824-4c2b-b5fd-2b344a8326b1\") " Feb 04 12:36:37 crc kubenswrapper[4728]: I0204 12:36:37.767182 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29afef70-d824-4c2b-b5fd-2b344a8326b1-host\") pod \"29afef70-d824-4c2b-b5fd-2b344a8326b1\" (UID: \"29afef70-d824-4c2b-b5fd-2b344a8326b1\") " Feb 04 12:36:37 crc kubenswrapper[4728]: I0204 12:36:37.767309 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29afef70-d824-4c2b-b5fd-2b344a8326b1-host" (OuterVolumeSpecName: "host") pod "29afef70-d824-4c2b-b5fd-2b344a8326b1" (UID: "29afef70-d824-4c2b-b5fd-2b344a8326b1"). InnerVolumeSpecName "host". 
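
The event={...} payload in the PLEG entries above is plain JSON. A small sketch decoding one of them; the struct is ours, with field names taken from the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // PodLifecycleEvent mirrors the event={...} payload printed in the
    // "SyncLoop (PLEG): event for pod" entries above.
    type PodLifecycleEvent struct {
        ID   string `json:"ID"`   // pod UID
        Type string `json:"Type"` // ContainerStarted, ContainerDied, ...
        Data string `json:"Data"` // container or sandbox ID
    }

    func main() {
        raw := `{"ID":"29afef70-d824-4c2b-b5fd-2b344a8326b1","Type":"ContainerDied","Data":"ab8aabbf5125e85d36d73e1a9858c2de488257e05640194f7512d3ac153988ee"}`
        var ev PodLifecycleEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("pod %s: %s (%s...)\n", ev.ID, ev.Type, ev.Data[:12])
    }
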
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 04 12:36:37 crc kubenswrapper[4728]: I0204 12:36:37.768063 4728 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29afef70-d824-4c2b-b5fd-2b344a8326b1-host\") on node \"crc\" DevicePath \"\"" Feb 04 12:36:37 crc kubenswrapper[4728]: I0204 12:36:37.773040 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29afef70-d824-4c2b-b5fd-2b344a8326b1-kube-api-access-hcv5w" (OuterVolumeSpecName: "kube-api-access-hcv5w") pod "29afef70-d824-4c2b-b5fd-2b344a8326b1" (UID: "29afef70-d824-4c2b-b5fd-2b344a8326b1"). InnerVolumeSpecName "kube-api-access-hcv5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:36:37 crc kubenswrapper[4728]: I0204 12:36:37.870297 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcv5w\" (UniqueName: \"kubernetes.io/projected/29afef70-d824-4c2b-b5fd-2b344a8326b1-kube-api-access-hcv5w\") on node \"crc\" DevicePath \"\"" Feb 04 12:36:38 crc kubenswrapper[4728]: I0204 12:36:38.069451 4728 scope.go:117] "RemoveContainer" containerID="ab8aabbf5125e85d36d73e1a9858c2de488257e05640194f7512d3ac153988ee" Feb 04 12:36:38 crc kubenswrapper[4728]: I0204 12:36:38.069515 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kknjn/crc-debug-t7n9k" Feb 04 12:36:39 crc kubenswrapper[4728]: I0204 12:36:39.565255 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29afef70-d824-4c2b-b5fd-2b344a8326b1" path="/var/lib/kubelet/pods/29afef70-d824-4c2b-b5fd-2b344a8326b1/volumes" Feb 04 12:37:27 crc kubenswrapper[4728]: I0204 12:37:27.695865 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_bd40a8f1-c4bd-4c7f-b80e-708802b76a25/init-config-reloader/0.log" Feb 04 12:37:27 crc kubenswrapper[4728]: I0204 12:37:27.930666 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_bd40a8f1-c4bd-4c7f-b80e-708802b76a25/init-config-reloader/0.log" Feb 04 12:37:27 crc kubenswrapper[4728]: I0204 12:37:27.943532 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_bd40a8f1-c4bd-4c7f-b80e-708802b76a25/config-reloader/0.log" Feb 04 12:37:27 crc kubenswrapper[4728]: I0204 12:37:27.954861 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_bd40a8f1-c4bd-4c7f-b80e-708802b76a25/alertmanager/0.log" Feb 04 12:37:28 crc kubenswrapper[4728]: I0204 12:37:28.125602 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56d59c75d6-qgl25_ceeca3bd-824e-4b51-a536-6ae20911faa9/barbican-api/0.log" Feb 04 12:37:28 crc kubenswrapper[4728]: I0204 12:37:28.977427 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-597658645d-gglvr_a9a14765-4f8e-445d-a260-28ce443609b8/barbican-keystone-listener/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.001123 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56d59c75d6-qgl25_ceeca3bd-824e-4b51-a536-6ae20911faa9/barbican-api-log/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.015352 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-597658645d-gglvr_a9a14765-4f8e-445d-a260-28ce443609b8/barbican-keystone-listener-log/0.log" Feb 04 
12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.175301 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55445887dc-67kkb_8a809eec-ba73-4746-a976-e43e762c78c0/barbican-worker/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.229140 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55445887dc-67kkb_8a809eec-ba73-4746-a976-e43e762c78c0/barbican-worker-log/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.389030 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-h6wm5_56f043ba-0442-438f-80c8-64fc95caf1f0/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.439424 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a1c17abe-702e-43d8-99cc-4e0a1b932990/ceilometer-central-agent/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.496848 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a1c17abe-702e-43d8-99cc-4e0a1b932990/ceilometer-notification-agent/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.671313 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a1c17abe-702e-43d8-99cc-4e0a1b932990/sg-core/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.686988 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a1c17abe-702e-43d8-99cc-4e0a1b932990/proxy-httpd/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.732242 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_11d8f352-7a48-49a3-a5fc-91929d41faf8/cinder-api/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.896807 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_11d8f352-7a48-49a3-a5fc-91929d41faf8/cinder-api-log/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.912538 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_282fd0ba-8410-4d6f-bd7a-8715e0f9f8be/cinder-scheduler/0.log" Feb 04 12:37:29 crc kubenswrapper[4728]: I0204 12:37:29.954781 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_282fd0ba-8410-4d6f-bd7a-8715e0f9f8be/probe/0.log" Feb 04 12:37:30 crc kubenswrapper[4728]: I0204 12:37:30.097995 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-tzjg6_6e9765bf-d2ef-4596-ab24-046221ee1d97/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:30 crc kubenswrapper[4728]: I0204 12:37:30.161408 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-h6czk_2980ce97-200b-40eb-b084-817ac3a421ca/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:30 crc kubenswrapper[4728]: I0204 12:37:30.766449 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-5zqtn_8a7c3943-584c-4f0f-ad10-4030ee23df91/init/0.log" Feb 04 12:37:30 crc kubenswrapper[4728]: I0204 12:37:30.906404 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-5zqtn_8a7c3943-584c-4f0f-ad10-4030ee23df91/init/0.log" Feb 04 12:37:31 crc kubenswrapper[4728]: I0204 12:37:31.002999 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-5zqtn_8a7c3943-584c-4f0f-ad10-4030ee23df91/dnsmasq-dns/0.log" Feb 04 12:37:31 crc kubenswrapper[4728]: I0204 12:37:31.055386 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-vv577_068d923c-e3c2-4221-8a84-7af590000487/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:31 crc kubenswrapper[4728]: I0204 12:37:31.222859 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d9e7acb6-6488-4369-9c08-f3843af8169c/glance-log/0.log" Feb 04 12:37:31 crc kubenswrapper[4728]: I0204 12:37:31.282657 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d9e7acb6-6488-4369-9c08-f3843af8169c/glance-httpd/0.log" Feb 04 12:37:31 crc kubenswrapper[4728]: I0204 12:37:31.410722 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a/glance-httpd/0.log" Feb 04 12:37:31 crc kubenswrapper[4728]: I0204 12:37:31.482405 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6213c3f8-a9f9-4c15-b2ba-d0bfa7290e4a/glance-log/0.log" Feb 04 12:37:31 crc kubenswrapper[4728]: I0204 12:37:31.981236 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-546d7984c6-n6fdl_eec7f6cd-431d-4a8f-8850-27aeb6a18f37/heat-api/0.log" Feb 04 12:37:32 crc kubenswrapper[4728]: I0204 12:37:32.018183 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7bd84699b9-9ldwf_47ec0af4-1025-4dec-9270-86a8ad62ba47/heat-engine/0.log" Feb 04 12:37:32 crc kubenswrapper[4728]: I0204 12:37:32.049227 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fb2sz_a127564d-8974-4dff-9963-d143e45e07f9/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:32 crc kubenswrapper[4728]: I0204 12:37:32.213005 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6bf54fd9cd-l9msv_5a350ee5-d239-42fc-9665-b07c506eb400/heat-cfnapi/0.log" Feb 04 12:37:32 crc kubenswrapper[4728]: I0204 12:37:32.245509 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-sppp5_4e045846-4329-41b7-8a9c-eb84a2231443/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:32 crc kubenswrapper[4728]: I0204 12:37:32.638874 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e33c5244-2507-465e-8565-bfbc216f6382/kube-state-metrics/0.log" Feb 04 12:37:32 crc kubenswrapper[4728]: I0204 12:37:32.740499 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29503441-7qplb_04c63d3d-617e-4b87-aa1f-1093a356ca44/keystone-cron/0.log" Feb 04 12:37:32 crc kubenswrapper[4728]: I0204 12:37:32.789556 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74d87669cb-xsvws_74a69221-1b4f-4ac9-bec8-82fd3cb462c9/keystone-api/0.log" Feb 04 12:37:32 crc kubenswrapper[4728]: I0204 12:37:32.880650 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-kcfmk_2350eb57-6059-4c44-8213-0472e0295ae5/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:33 crc kubenswrapper[4728]: I0204 12:37:33.085317 4728 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_neutron-84679c4c57-hc428_58949fe9-f572-4c71-80c8-925cee89421e/neutron-api/0.log" Feb 04 12:37:33 crc kubenswrapper[4728]: I0204 12:37:33.138645 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84679c4c57-hc428_58949fe9-f572-4c71-80c8-925cee89421e/neutron-httpd/0.log" Feb 04 12:37:33 crc kubenswrapper[4728]: I0204 12:37:33.267827 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-rn5wz_aee1c19a-e4cd-4457-a13a-434722c5d516/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:33 crc kubenswrapper[4728]: I0204 12:37:33.599529 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_63d40f77-97e6-4954-9b50-4d2c6032b5b8/nova-api-log/0.log" Feb 04 12:37:33 crc kubenswrapper[4728]: I0204 12:37:33.732904 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5cfef0c5-481e-49c4-b2e3-f37222f7aa50/nova-cell0-conductor-conductor/0.log" Feb 04 12:37:33 crc kubenswrapper[4728]: I0204 12:37:33.816310 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_63d40f77-97e6-4954-9b50-4d2c6032b5b8/nova-api-api/0.log" Feb 04 12:37:33 crc kubenswrapper[4728]: I0204 12:37:33.926954 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a095a869-5d4b-4061-b13e-3d3f7f4c27ba/nova-cell1-conductor-conductor/0.log" Feb 04 12:37:34 crc kubenswrapper[4728]: I0204 12:37:34.067494 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_eb3d9dc1-c306-408a-ae51-d025cf731399/nova-cell1-novncproxy-novncproxy/0.log" Feb 04 12:37:34 crc kubenswrapper[4728]: I0204 12:37:34.256974 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-qsdd4_52c00cc4-57a9-41e7-99f0-1cc6fa0442e2/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:34 crc kubenswrapper[4728]: I0204 12:37:34.395522 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8ea985e9-30fb-4e8e-8fd9-29c156245bfd/nova-metadata-log/0.log" Feb 04 12:37:34 crc kubenswrapper[4728]: I0204 12:37:34.682308 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_36828b26-ee22-4926-82ba-21d3c7be7f6d/nova-scheduler-scheduler/0.log" Feb 04 12:37:34 crc kubenswrapper[4728]: I0204 12:37:34.916069 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a6e91a91-91b5-4617-9ba2-16e77e144334/mysql-bootstrap/0.log" Feb 04 12:37:35 crc kubenswrapper[4728]: I0204 12:37:35.133019 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a6e91a91-91b5-4617-9ba2-16e77e144334/galera/0.log" Feb 04 12:37:35 crc kubenswrapper[4728]: I0204 12:37:35.133178 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a6e91a91-91b5-4617-9ba2-16e77e144334/mysql-bootstrap/0.log" Feb 04 12:37:35 crc kubenswrapper[4728]: I0204 12:37:35.329594 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_da6f384a-b651-4e8c-b17b-355d35b4e5a8/mysql-bootstrap/0.log" Feb 04 12:37:35 crc kubenswrapper[4728]: I0204 12:37:35.576340 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_da6f384a-b651-4e8c-b17b-355d35b4e5a8/mysql-bootstrap/0.log" Feb 04 12:37:35 crc kubenswrapper[4728]: I0204 12:37:35.631022 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_da6f384a-b651-4e8c-b17b-355d35b4e5a8/galera/0.log" Feb 04 12:37:35 crc kubenswrapper[4728]: I0204 12:37:35.843445 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-znkl4_00063022-f33f-4668-a588-e2b677acfda1/openstack-network-exporter/0.log" Feb 04 12:37:35 crc kubenswrapper[4728]: I0204 12:37:35.848378 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f6dfc933-2564-456a-ad35-ae3bf8afdbd3/openstackclient/0.log" Feb 04 12:37:35 crc kubenswrapper[4728]: I0204 12:37:35.863756 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8ea985e9-30fb-4e8e-8fd9-29c156245bfd/nova-metadata-metadata/0.log" Feb 04 12:37:36 crc kubenswrapper[4728]: I0204 12:37:36.088777 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mf6rw_faf427b7-1198-4ca8-9873-dc531a2bc572/ovsdb-server-init/0.log" Feb 04 12:37:36 crc kubenswrapper[4728]: I0204 12:37:36.251255 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mf6rw_faf427b7-1198-4ca8-9873-dc531a2bc572/ovsdb-server-init/0.log" Feb 04 12:37:36 crc kubenswrapper[4728]: I0204 12:37:36.269516 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mf6rw_faf427b7-1198-4ca8-9873-dc531a2bc572/ovs-vswitchd/0.log" Feb 04 12:37:36 crc kubenswrapper[4728]: I0204 12:37:36.365703 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mf6rw_faf427b7-1198-4ca8-9873-dc531a2bc572/ovsdb-server/0.log" Feb 04 12:37:36 crc kubenswrapper[4728]: I0204 12:37:36.492935 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-q2pd5_6c7c1adf-4c02-42b4-997d-291a7d033983/ovn-controller/0.log" Feb 04 12:37:36 crc kubenswrapper[4728]: I0204 12:37:36.638392 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cf667_7dc36402-dfbd-4f2b-a604-b24331482d0e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:36 crc kubenswrapper[4728]: I0204 12:37:36.694625 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7e2fbe12-58b0-4438-a274-68040a4ec197/openstack-network-exporter/0.log" Feb 04 12:37:36 crc kubenswrapper[4728]: I0204 12:37:36.763052 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7e2fbe12-58b0-4438-a274-68040a4ec197/ovn-northd/0.log" Feb 04 12:37:36 crc kubenswrapper[4728]: I0204 12:37:36.889263 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1f594403-9f70-4fa5-81ef-3b0e5f5d98e4/openstack-network-exporter/0.log" Feb 04 12:37:36 crc kubenswrapper[4728]: I0204 12:37:36.957005 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1f594403-9f70-4fa5-81ef-3b0e5f5d98e4/ovsdbserver-nb/0.log" Feb 04 12:37:37 crc kubenswrapper[4728]: I0204 12:37:37.121808 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2fe072a1-7563-4a3a-b52f-6dacc6771099/openstack-network-exporter/0.log" Feb 04 12:37:37 crc kubenswrapper[4728]: I0204 12:37:37.133773 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_2fe072a1-7563-4a3a-b52f-6dacc6771099/ovsdbserver-sb/0.log" Feb 04 12:37:37 crc kubenswrapper[4728]: I0204 12:37:37.349298 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85f76984f4-b8kmh_340bbc5d-d2d4-48cc-bf4b-6f2454d9819a/placement-api/0.log" Feb 04 12:37:37 crc kubenswrapper[4728]: I0204 12:37:37.418956 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85f76984f4-b8kmh_340bbc5d-d2d4-48cc-bf4b-6f2454d9819a/placement-log/0.log" Feb 04 12:37:37 crc kubenswrapper[4728]: I0204 12:37:37.469625 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34765442-96a6-4824-912b-1d94f7d2a4c3/init-config-reloader/0.log" Feb 04 12:37:37 crc kubenswrapper[4728]: I0204 12:37:37.667547 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34765442-96a6-4824-912b-1d94f7d2a4c3/init-config-reloader/0.log" Feb 04 12:37:37 crc kubenswrapper[4728]: I0204 12:37:37.713957 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34765442-96a6-4824-912b-1d94f7d2a4c3/prometheus/0.log" Feb 04 12:37:37 crc kubenswrapper[4728]: I0204 12:37:37.719156 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34765442-96a6-4824-912b-1d94f7d2a4c3/config-reloader/0.log" Feb 04 12:37:37 crc kubenswrapper[4728]: I0204 12:37:37.750774 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_34765442-96a6-4824-912b-1d94f7d2a4c3/thanos-sidecar/0.log" Feb 04 12:37:37 crc kubenswrapper[4728]: I0204 12:37:37.942311 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f8dc874e-ea4b-47a5-9f00-d1633fb509ba/setup-container/0.log" Feb 04 12:37:38 crc kubenswrapper[4728]: I0204 12:37:38.168027 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f8dc874e-ea4b-47a5-9f00-d1633fb509ba/rabbitmq/0.log" Feb 04 12:37:38 crc kubenswrapper[4728]: I0204 12:37:38.249263 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f8dc874e-ea4b-47a5-9f00-d1633fb509ba/setup-container/0.log" Feb 04 12:37:38 crc kubenswrapper[4728]: I0204 12:37:38.281153 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2f73f795-7173-4835-b233-b78a4bd41854/setup-container/0.log" Feb 04 12:37:38 crc kubenswrapper[4728]: I0204 12:37:38.504882 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2f73f795-7173-4835-b233-b78a4bd41854/rabbitmq/0.log" Feb 04 12:37:38 crc kubenswrapper[4728]: I0204 12:37:38.543397 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7klbl_3e1c830b-d255-42d9-843e-80bee025b267/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:38 crc kubenswrapper[4728]: I0204 12:37:38.551119 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2f73f795-7173-4835-b233-b78a4bd41854/setup-container/0.log" Feb 04 12:37:39 crc kubenswrapper[4728]: I0204 12:37:39.357225 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-gllch_84be5405-8879-4346-aeed-c55e106b37f7/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:39 crc 
kubenswrapper[4728]: I0204 12:37:39.373551 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-78pdt_8e03d68e-aab9-4abb-969e-649efb0dc80a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:39 crc kubenswrapper[4728]: I0204 12:37:39.594294 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-zxkjl_d18d7dd8-c0a7-4f82-87d5-415841b53578/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:39 crc kubenswrapper[4728]: I0204 12:37:39.761813 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-bt5ct_7d180f53-1c00-4628-bd93-c3b5646307fd/ssh-known-hosts-edpm-deployment/0.log" Feb 04 12:37:39 crc kubenswrapper[4728]: I0204 12:37:39.887106 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55db5db6dc-zsx22_b0a1bd86-4ac9-4cc5-af9b-447cf553f266/proxy-server/0.log" Feb 04 12:37:40 crc kubenswrapper[4728]: I0204 12:37:40.005623 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55db5db6dc-zsx22_b0a1bd86-4ac9-4cc5-af9b-447cf553f266/proxy-httpd/0.log" Feb 04 12:37:40 crc kubenswrapper[4728]: I0204 12:37:40.009634 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-kjwnv_2cc0cb1f-508e-4ac0-b653-aeb03317bdd7/swift-ring-rebalance/0.log" Feb 04 12:37:40 crc kubenswrapper[4728]: I0204 12:37:40.168367 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/account-auditor/0.log" Feb 04 12:37:40 crc kubenswrapper[4728]: I0204 12:37:40.234020 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/account-reaper/0.log" Feb 04 12:37:40 crc kubenswrapper[4728]: I0204 12:37:40.277083 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/account-replicator/0.log" Feb 04 12:37:40 crc kubenswrapper[4728]: I0204 12:37:40.946932 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/account-server/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.007545 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/container-server/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.014995 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/container-auditor/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.050683 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/container-replicator/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.177405 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/container-updater/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.222506 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/object-expirer/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.302814 4728 log.go:25] "Finished parsing log file" 
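
The long run of log.go:25 "Finished parsing log file" entries above (it continues below) tracks must-gather reading every container log under /var/log/pods, whose layout is visible in the paths themselves: <namespace>_<podname>_<poduid>/<container>/<restart>.log. A sketch of walking that tree the same way:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    func main() {
        // Layout, as seen in the paths above:
        //   /var/log/pods/<namespace>_<podname>_<poduid>/<container>/<restart>.log
        root := "/var/log/pods"
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil {
                return err
            }
            if !d.IsDir() && filepath.Ext(path) == ".log" {
                fmt.Println("parsing log file", path)
            }
            return nil
        })
        if err != nil {
            fmt.Println("walk:", err)
        }
    }
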
path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/object-auditor/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.308152 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/object-replicator/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.426626 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/object-server/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.438154 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/object-updater/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.529611 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/swift-recon-cron/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.573314 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea4f2286-1f91-46b5-98af-0ca776207d16/rsync/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.831831 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-dbbmz_4bf728e8-5290-463c-bd6a-194f0ddb3c5d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:41 crc kubenswrapper[4728]: I0204 12:37:41.849897 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9hnwc_a257135e-ed50-4619-adf8-c7a29970062c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 04 12:37:50 crc kubenswrapper[4728]: I0204 12:37:50.229538 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f6e14837-5f91-48dd-ab9c-8fad208e9d88/memcached/0.log" Feb 04 12:38:05 crc kubenswrapper[4728]: I0204 12:38:05.447976 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:38:05 crc kubenswrapper[4728]: I0204 12:38:05.448593 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:38:12 crc kubenswrapper[4728]: I0204 12:38:12.336732 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw_a5f60744-0d28-4cbf-978a-f3cc15df91cf/util/0.log" Feb 04 12:38:12 crc kubenswrapper[4728]: I0204 12:38:12.413252 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw_a5f60744-0d28-4cbf-978a-f3cc15df91cf/util/0.log" Feb 04 12:38:12 crc kubenswrapper[4728]: I0204 12:38:12.478209 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw_a5f60744-0d28-4cbf-978a-f3cc15df91cf/pull/0.log" Feb 04 12:38:12 crc kubenswrapper[4728]: I0204 12:38:12.523620 4728 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw_a5f60744-0d28-4cbf-978a-f3cc15df91cf/pull/0.log" Feb 04 12:38:12 crc kubenswrapper[4728]: I0204 12:38:12.722932 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw_a5f60744-0d28-4cbf-978a-f3cc15df91cf/extract/0.log" Feb 04 12:38:12 crc kubenswrapper[4728]: I0204 12:38:12.767689 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw_a5f60744-0d28-4cbf-978a-f3cc15df91cf/util/0.log" Feb 04 12:38:12 crc kubenswrapper[4728]: I0204 12:38:12.784628 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a878602390e07c9665c279db9d4c916a31d8c5531dee5f63401f166eads22sw_a5f60744-0d28-4cbf-978a-f3cc15df91cf/pull/0.log" Feb 04 12:38:12 crc kubenswrapper[4728]: I0204 12:38:12.986603 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-57ztz_0760f0c3-0076-4be3-8b2e-2dc9fcf0d929/manager/0.log" Feb 04 12:38:13 crc kubenswrapper[4728]: I0204 12:38:13.070435 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-b5scl_e8b0005b-18c6-4701-b22f-41d0127becf7/manager/0.log" Feb 04 12:38:13 crc kubenswrapper[4728]: I0204 12:38:13.204638 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-rvp9b_f488ccbd-9346-4fd7-bfce-f7e5375f9100/manager/0.log" Feb 04 12:38:13 crc kubenswrapper[4728]: I0204 12:38:13.414234 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-cbzrl_3a514d11-28a0-4a17-9714-7a8d60216402/manager/0.log" Feb 04 12:38:13 crc kubenswrapper[4728]: I0204 12:38:13.509961 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-mfshf_3829b622-23b9-4160-8875-b2c310b3b531/manager/0.log" Feb 04 12:38:13 crc kubenswrapper[4728]: I0204 12:38:13.686036 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-848zn_92f15a6a-b8bc-470b-9558-72b958a8c32b/manager/0.log" Feb 04 12:38:13 crc kubenswrapper[4728]: I0204 12:38:13.901169 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-phlht_624d3845-dd5b-46eb-80cc-5a587a812d78/manager/0.log" Feb 04 12:38:14 crc kubenswrapper[4728]: I0204 12:38:14.094621 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-78rpz_b6c7167f-86c2-4e7e-8699-24f3932124ab/manager/0.log" Feb 04 12:38:14 crc kubenswrapper[4728]: I0204 12:38:14.190937 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-qt7k8_29a70c36-efb8-40bc-89ec-68d20f9cf253/manager/0.log" Feb 04 12:38:14 crc kubenswrapper[4728]: I0204 12:38:14.312625 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-44sgj_698c89f3-b4ef-443f-bce4-f1fe2fdbc1c7/manager/0.log" Feb 04 12:38:14 crc kubenswrapper[4728]: I0204 12:38:14.413521 4728 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-hq55j_4e4f1e2f-ac6a-4dce-a074-2637e53f35a7/manager/0.log" Feb 04 12:38:14 crc kubenswrapper[4728]: I0204 12:38:14.729437 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-r7hvc_cb387892-df64-4339-abd3-925fce438123/manager/0.log" Feb 04 12:38:14 crc kubenswrapper[4728]: I0204 12:38:14.859538 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-f7fb7_6d88ab1e-b850-444e-90b2-05b6e311178e/manager/0.log" Feb 04 12:38:14 crc kubenswrapper[4728]: I0204 12:38:14.956304 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-cghq5_c833a690-9e25-4bbe-9d81-5d9cddbc7279/manager/0.log" Feb 04 12:38:15 crc kubenswrapper[4728]: I0204 12:38:15.049098 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dl677q_dc74eb23-85aa-4df4-8273-0af9a0a37dda/manager/0.log" Feb 04 12:38:15 crc kubenswrapper[4728]: I0204 12:38:15.401037 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5957c4869f-7wljg_97827c0e-98ed-4486-9ee8-918dc6df645b/operator/0.log" Feb 04 12:38:15 crc kubenswrapper[4728]: I0204 12:38:15.529318 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-24b52_4c417533-a48b-4aaf-a428-6844c84b9845/registry-server/0.log" Feb 04 12:38:15 crc kubenswrapper[4728]: I0204 12:38:15.795332 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-pmpvx_0f328a26-b914-49b2-9124-b12b968232dd/manager/0.log" Feb 04 12:38:15 crc kubenswrapper[4728]: I0204 12:38:15.953074 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-bmrq5_86e535af-d713-4a58-80c0-0ce6a464f666/manager/0.log" Feb 04 12:38:16 crc kubenswrapper[4728]: I0204 12:38:16.136273 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xzrsm_18e15914-8bd3-42e9-9c5b-f973b203ece8/operator/0.log" Feb 04 12:38:16 crc kubenswrapper[4728]: I0204 12:38:16.379463 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-hflqw_b7384fe1-ae77-4f08-ad0e-e5fcc55f8d81/manager/0.log" Feb 04 12:38:16 crc kubenswrapper[4728]: I0204 12:38:16.654869 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-wrk5q_505cc508-1a1d-44d9-9067-ca0c376e6522/manager/0.log" Feb 04 12:38:16 crc kubenswrapper[4728]: I0204 12:38:16.684665 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-67db8bbf87-ffl8t_03f4099d-cbdc-4884-a85a-2ffe82d616d1/manager/0.log" Feb 04 12:38:16 crc kubenswrapper[4728]: I0204 12:38:16.703156 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6dcb54f59-lnlx2_3b49d7d8-7c63-482c-b882-25c01e798afe/manager/0.log" Feb 04 12:38:16 crc kubenswrapper[4728]: I0204 12:38:16.820640 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-cf2v5_9fe9a75e-2006-4143-a451-e135b2d68297/manager/0.log" Feb 04 12:38:35 crc kubenswrapper[4728]: I0204 12:38:35.448432 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:38:35 crc kubenswrapper[4728]: I0204 12:38:35.448994 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:38:39 crc kubenswrapper[4728]: I0204 12:38:39.967615 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-cvhf8_cb2c7326-dd1e-481c-ad3f-c8f884d636b1/control-plane-machine-set-operator/0.log" Feb 04 12:38:40 crc kubenswrapper[4728]: I0204 12:38:40.038782 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r576m_1f802986-f97c-4813-9aec-d48d43eeedae/kube-rbac-proxy/0.log" Feb 04 12:38:40 crc kubenswrapper[4728]: I0204 12:38:40.166687 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r576m_1f802986-f97c-4813-9aec-d48d43eeedae/machine-api-operator/0.log" Feb 04 12:38:53 crc kubenswrapper[4728]: I0204 12:38:53.192100 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-9qsm6_58edae62-bdc6-49b2-89bc-a2e0ff5184d6/cert-manager-controller/0.log" Feb 04 12:38:53 crc kubenswrapper[4728]: I0204 12:38:53.315905 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-5d4gb_0985f153-e731-4ee3-8c41-315179a557dc/cert-manager-cainjector/0.log" Feb 04 12:38:53 crc kubenswrapper[4728]: I0204 12:38:53.428823 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4qfrz_cc113635-ef4c-4427-b4f8-92def9d5c19f/cert-manager-webhook/0.log" Feb 04 12:39:05 crc kubenswrapper[4728]: I0204 12:39:05.372472 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6f874f9768-trc2v_a9d1039a-1431-4b01-ac8a-173aba063825/nmstate-console-plugin/0.log" Feb 04 12:39:05 crc kubenswrapper[4728]: I0204 12:39:05.449204 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 04 12:39:05 crc kubenswrapper[4728]: I0204 12:39:05.449277 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 04 12:39:05 crc kubenswrapper[4728]: I0204 12:39:05.449325 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-grzvj" Feb 04 12:39:05 crc kubenswrapper[4728]: I0204 12:39:05.450179 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e"} pod="openshift-machine-config-operator/machine-config-daemon-grzvj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 04 12:39:05 crc kubenswrapper[4728]: I0204 12:39:05.450254 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" containerID="cri-o://f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" gracePeriod=600 Feb 04 12:39:05 crc kubenswrapper[4728]: I0204 12:39:05.544000 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-l4f7z_2ae14687-d092-4364-b8d2-f97b412741f0/nmstate-handler/0.log" Feb 04 12:39:05 crc kubenswrapper[4728]: E0204 12:39:05.577122 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:39:05 crc kubenswrapper[4728]: I0204 12:39:05.591990 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-677949fd65-qpw7t_4bd9f410-cd23-48ce-b65b-04573f621b0c/kube-rbac-proxy/0.log" Feb 04 12:39:05 crc kubenswrapper[4728]: I0204 12:39:05.703599 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-677949fd65-qpw7t_4bd9f410-cd23-48ce-b65b-04573f621b0c/nmstate-metrics/0.log" Feb 04 12:39:05 crc kubenswrapper[4728]: I0204 12:39:05.807829 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-57bf49857b-27rqq_67a8e926-007f-41c2-aace-07706b07e072/nmstate-operator/0.log" Feb 04 12:39:05 crc kubenswrapper[4728]: I0204 12:39:05.886990 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-bd5678b45-jppz6_860f14fe-36e8-42e9-be2a-26ab378c6436/nmstate-webhook/0.log" Feb 04 12:39:06 crc kubenswrapper[4728]: I0204 12:39:06.456913 4728 generic.go:334] "Generic (PLEG): container finished" podID="3c8409df-def9-46a0-a813-6788ddf1e292" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" exitCode=0 Feb 04 12:39:06 crc kubenswrapper[4728]: I0204 12:39:06.456953 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerDied","Data":"f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e"} Feb 04 12:39:06 crc kubenswrapper[4728]: I0204 12:39:06.456984 4728 scope.go:117] "RemoveContainer" containerID="d3827ef241d57a2fe47b067be7ffbf7b5cdcdfe8ffcb18ff63fee0c7bc2e54cd" Feb 04 12:39:06 crc kubenswrapper[4728]: I0204 12:39:06.457622 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:39:06 crc 
kubenswrapper[4728]: E0204 12:39:06.457956 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:39:17 crc kubenswrapper[4728]: I0204 12:39:17.553569 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:39:17 crc kubenswrapper[4728]: E0204 12:39:17.554519 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:39:21 crc kubenswrapper[4728]: I0204 12:39:21.359346 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-pld8h_9679f4b5-ea5b-4998-92e0-08fd965f9b7f/prometheus-operator/0.log" Feb 04 12:39:21 crc kubenswrapper[4728]: I0204 12:39:21.647666 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr_39c2f459-1049-49f1-9010-39b354d6f9e9/prometheus-operator-admission-webhook/0.log" Feb 04 12:39:21 crc kubenswrapper[4728]: I0204 12:39:21.665220 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6_6d8e0076-8a70-44f0-a7c4-25c1a70a1e89/prometheus-operator-admission-webhook/0.log" Feb 04 12:39:21 crc kubenswrapper[4728]: I0204 12:39:21.855335 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6x4mh_58b528f7-d9c7-4cde-b7d0-4197972ef92a/operator/0.log" Feb 04 12:39:21 crc kubenswrapper[4728]: I0204 12:39:21.931315 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-xdtdm_5cf5a02e-7b7f-453a-9336-c4d98f8470e6/perses-operator/0.log" Feb 04 12:39:28 crc kubenswrapper[4728]: I0204 12:39:28.554276 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:39:28 crc kubenswrapper[4728]: E0204 12:39:28.555074 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:39:37 crc kubenswrapper[4728]: I0204 12:39:37.148279 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-9c48fdfd-5w5qt_bea37505-bec7-466d-a718-00720e7102e8/kube-rbac-proxy/0.log" Feb 04 12:39:37 crc kubenswrapper[4728]: I0204 12:39:37.358193 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-9c48fdfd-5w5qt_bea37505-bec7-466d-a718-00720e7102e8/controller/0.log" Feb 04 12:39:37 crc kubenswrapper[4728]: I0204 12:39:37.401268 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-97dfd4f9f-jv5kf_135aeb72-0473-4fa8-b594-c933ad100216/frr-k8s-webhook-server/0.log" Feb 04 12:39:37 crc kubenswrapper[4728]: I0204 12:39:37.538804 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-frr-files/0.log" Feb 04 12:39:37 crc kubenswrapper[4728]: I0204 12:39:37.746978 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-reloader/0.log" Feb 04 12:39:37 crc kubenswrapper[4728]: I0204 12:39:37.771065 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-reloader/0.log" Feb 04 12:39:37 crc kubenswrapper[4728]: I0204 12:39:37.790465 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-frr-files/0.log" Feb 04 12:39:37 crc kubenswrapper[4728]: I0204 12:39:37.828923 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-metrics/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.001775 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-reloader/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.003925 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-metrics/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.007872 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-frr-files/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.057152 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-metrics/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.269391 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-frr-files/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.273620 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-reloader/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.327292 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/controller/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.342321 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/cp-metrics/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.495521 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/frr-metrics/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.552249 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/kube-rbac-proxy/0.log" Feb 04 
12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.607632 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/kube-rbac-proxy-frr/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.802379 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/reloader/0.log" Feb 04 12:39:38 crc kubenswrapper[4728]: I0204 12:39:38.956110 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7ffd8d88fd-dfn9h_c5066fff-a329-4c7b-a70f-cee08caa3393/manager/0.log" Feb 04 12:39:39 crc kubenswrapper[4728]: I0204 12:39:39.073005 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-868877877f-4z2fv_1cc0e7f3-e154-4508-b731-34c7b2e5cd6e/webhook-server/0.log" Feb 04 12:39:39 crc kubenswrapper[4728]: I0204 12:39:39.408447 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cmpf4_5d51181d-348f-4581-8429-b8bbb614d0e7/kube-rbac-proxy/0.log" Feb 04 12:39:39 crc kubenswrapper[4728]: I0204 12:39:39.895837 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cmpf4_5d51181d-348f-4581-8429-b8bbb614d0e7/speaker/0.log" Feb 04 12:39:40 crc kubenswrapper[4728]: I0204 12:39:40.240935 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wk247_eb981a9b-06b1-47a4-aa97-3d46980d3769/frr/0.log" Feb 04 12:39:41 crc kubenswrapper[4728]: I0204 12:39:41.559125 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:39:41 crc kubenswrapper[4728]: E0204 12:39:41.559858 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:39:54 crc kubenswrapper[4728]: I0204 12:39:54.553955 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:39:54 crc kubenswrapper[4728]: E0204 12:39:54.554782 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:39:55 crc kubenswrapper[4728]: I0204 12:39:55.157673 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn_3f46d5c6-6a3b-45e4-8be8-c23e04c24afa/util/0.log" Feb 04 12:39:55 crc kubenswrapper[4728]: I0204 12:39:55.304872 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn_3f46d5c6-6a3b-45e4-8be8-c23e04c24afa/util/0.log" Feb 04 12:39:55 crc kubenswrapper[4728]: I0204 12:39:55.808496 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn_3f46d5c6-6a3b-45e4-8be8-c23e04c24afa/pull/0.log" Feb 04 12:39:55 crc kubenswrapper[4728]: I0204 12:39:55.822956 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn_3f46d5c6-6a3b-45e4-8be8-c23e04c24afa/pull/0.log" Feb 04 12:39:56 crc kubenswrapper[4728]: I0204 12:39:56.083964 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn_3f46d5c6-6a3b-45e4-8be8-c23e04c24afa/util/0.log" Feb 04 12:39:56 crc kubenswrapper[4728]: I0204 12:39:56.088311 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn_3f46d5c6-6a3b-45e4-8be8-c23e04c24afa/extract/0.log" Feb 04 12:39:56 crc kubenswrapper[4728]: I0204 12:39:56.098995 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_36e0ca81f14c45fd6b476a38b8aaa3d10fa215eed96d7e5f11f824cb8744qqn_3f46d5c6-6a3b-45e4-8be8-c23e04c24afa/pull/0.log" Feb 04 12:39:56 crc kubenswrapper[4728]: I0204 12:39:56.291901 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp_00be6ffa-f38d-4321-bc01-9f7fe4d03ed6/util/0.log" Feb 04 12:39:56 crc kubenswrapper[4728]: I0204 12:39:56.479145 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp_00be6ffa-f38d-4321-bc01-9f7fe4d03ed6/util/0.log" Feb 04 12:39:56 crc kubenswrapper[4728]: I0204 12:39:56.481195 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp_00be6ffa-f38d-4321-bc01-9f7fe4d03ed6/pull/0.log" Feb 04 12:39:56 crc kubenswrapper[4728]: I0204 12:39:56.481441 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp_00be6ffa-f38d-4321-bc01-9f7fe4d03ed6/pull/0.log" Feb 04 12:39:56 crc kubenswrapper[4728]: I0204 12:39:56.674389 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp_00be6ffa-f38d-4321-bc01-9f7fe4d03ed6/pull/0.log" Feb 04 12:39:56 crc kubenswrapper[4728]: I0204 12:39:56.717940 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp_00be6ffa-f38d-4321-bc01-9f7fe4d03ed6/extract/0.log" Feb 04 12:39:56 crc kubenswrapper[4728]: I0204 12:39:56.732799 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sqcp_00be6ffa-f38d-4321-bc01-9f7fe4d03ed6/util/0.log" Feb 04 12:39:56 crc kubenswrapper[4728]: I0204 12:39:56.940674 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt_23817eeb-0169-4a0b-bc08-5d83377e17b2/util/0.log" Feb 04 12:39:57 crc kubenswrapper[4728]: I0204 12:39:57.262269 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt_23817eeb-0169-4a0b-bc08-5d83377e17b2/pull/0.log" Feb 04 12:39:57 crc kubenswrapper[4728]: I0204 12:39:57.289259 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt_23817eeb-0169-4a0b-bc08-5d83377e17b2/util/0.log" Feb 04 12:39:57 crc kubenswrapper[4728]: I0204 12:39:57.301284 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt_23817eeb-0169-4a0b-bc08-5d83377e17b2/pull/0.log" Feb 04 12:39:57 crc kubenswrapper[4728]: I0204 12:39:57.522275 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt_23817eeb-0169-4a0b-bc08-5d83377e17b2/util/0.log" Feb 04 12:39:57 crc kubenswrapper[4728]: I0204 12:39:57.543779 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt_23817eeb-0169-4a0b-bc08-5d83377e17b2/extract/0.log" Feb 04 12:39:57 crc kubenswrapper[4728]: I0204 12:39:57.604220 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b3b4b1b538a18740f8639e533bbe75ea2d3029144c2cc3cb07d4049e58sztdt_23817eeb-0169-4a0b-bc08-5d83377e17b2/pull/0.log" Feb 04 12:39:57 crc kubenswrapper[4728]: I0204 12:39:57.741097 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46hsv_20519b63-2a55-4fd8-8ed9-d964eca67d43/extract-utilities/0.log" Feb 04 12:39:57 crc kubenswrapper[4728]: I0204 12:39:57.951117 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46hsv_20519b63-2a55-4fd8-8ed9-d964eca67d43/extract-utilities/0.log" Feb 04 12:39:57 crc kubenswrapper[4728]: I0204 12:39:57.951704 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46hsv_20519b63-2a55-4fd8-8ed9-d964eca67d43/extract-content/0.log" Feb 04 12:39:57 crc kubenswrapper[4728]: I0204 12:39:57.969897 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46hsv_20519b63-2a55-4fd8-8ed9-d964eca67d43/extract-content/0.log" Feb 04 12:39:58 crc kubenswrapper[4728]: I0204 12:39:58.128435 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46hsv_20519b63-2a55-4fd8-8ed9-d964eca67d43/extract-content/0.log" Feb 04 12:39:58 crc kubenswrapper[4728]: I0204 12:39:58.135896 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46hsv_20519b63-2a55-4fd8-8ed9-d964eca67d43/extract-utilities/0.log" Feb 04 12:39:58 crc kubenswrapper[4728]: I0204 12:39:58.330407 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-htcv9_0e4e9ddb-3fa7-445f-9a8a-662816c760c7/extract-utilities/0.log" Feb 04 12:39:58 crc kubenswrapper[4728]: I0204 12:39:58.584485 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-htcv9_0e4e9ddb-3fa7-445f-9a8a-662816c760c7/extract-utilities/0.log" Feb 04 12:39:58 crc kubenswrapper[4728]: I0204 12:39:58.650323 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-htcv9_0e4e9ddb-3fa7-445f-9a8a-662816c760c7/extract-content/0.log" Feb 04 12:39:58 crc kubenswrapper[4728]: I0204 12:39:58.697629 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-htcv9_0e4e9ddb-3fa7-445f-9a8a-662816c760c7/extract-content/0.log" Feb 04 12:39:58 crc kubenswrapper[4728]: I0204 12:39:58.850226 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-htcv9_0e4e9ddb-3fa7-445f-9a8a-662816c760c7/extract-utilities/0.log" Feb 04 12:39:58 crc kubenswrapper[4728]: I0204 12:39:58.873196 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-htcv9_0e4e9ddb-3fa7-445f-9a8a-662816c760c7/extract-content/0.log" Feb 04 12:39:59 crc kubenswrapper[4728]: I0204 12:39:59.138460 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8zzsd_cc15bd74-2783-4922-bb1b-9d4b38b5f3ed/marketplace-operator/0.log" Feb 04 12:39:59 crc kubenswrapper[4728]: I0204 12:39:59.261268 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-smtjc_d17c89da-8862-4f3b-a801-2c56fabe069d/extract-utilities/0.log" Feb 04 12:39:59 crc kubenswrapper[4728]: I0204 12:39:59.271950 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46hsv_20519b63-2a55-4fd8-8ed9-d964eca67d43/registry-server/0.log" Feb 04 12:39:59 crc kubenswrapper[4728]: I0204 12:39:59.555312 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-smtjc_d17c89da-8862-4f3b-a801-2c56fabe069d/extract-utilities/0.log" Feb 04 12:39:59 crc kubenswrapper[4728]: I0204 12:39:59.578141 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-smtjc_d17c89da-8862-4f3b-a801-2c56fabe069d/extract-content/0.log" Feb 04 12:39:59 crc kubenswrapper[4728]: I0204 12:39:59.666129 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-smtjc_d17c89da-8862-4f3b-a801-2c56fabe069d/extract-content/0.log" Feb 04 12:39:59 crc kubenswrapper[4728]: I0204 12:39:59.818110 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-htcv9_0e4e9ddb-3fa7-445f-9a8a-662816c760c7/registry-server/0.log" Feb 04 12:39:59 crc kubenswrapper[4728]: I0204 12:39:59.854245 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-smtjc_d17c89da-8862-4f3b-a801-2c56fabe069d/extract-utilities/0.log" Feb 04 12:39:59 crc kubenswrapper[4728]: I0204 12:39:59.861936 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-smtjc_d17c89da-8862-4f3b-a801-2c56fabe069d/extract-content/0.log" Feb 04 12:40:00 crc kubenswrapper[4728]: I0204 12:40:00.023952 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9rjz_adee048d-8c91-4807-bb6f-84bb4af68a27/extract-utilities/0.log" Feb 04 12:40:00 crc kubenswrapper[4728]: I0204 12:40:00.106181 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-smtjc_d17c89da-8862-4f3b-a801-2c56fabe069d/registry-server/0.log" Feb 04 12:40:00 crc kubenswrapper[4728]: I0204 12:40:00.249655 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-g9rjz_adee048d-8c91-4807-bb6f-84bb4af68a27/extract-content/0.log" Feb 04 12:40:00 crc kubenswrapper[4728]: I0204 12:40:00.256171 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9rjz_adee048d-8c91-4807-bb6f-84bb4af68a27/extract-content/0.log" Feb 04 12:40:00 crc kubenswrapper[4728]: I0204 12:40:00.271101 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9rjz_adee048d-8c91-4807-bb6f-84bb4af68a27/extract-utilities/0.log" Feb 04 12:40:00 crc kubenswrapper[4728]: I0204 12:40:00.456402 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9rjz_adee048d-8c91-4807-bb6f-84bb4af68a27/extract-utilities/0.log" Feb 04 12:40:00 crc kubenswrapper[4728]: I0204 12:40:00.503487 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9rjz_adee048d-8c91-4807-bb6f-84bb4af68a27/extract-content/0.log" Feb 04 12:40:01 crc kubenswrapper[4728]: I0204 12:40:01.161946 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g9rjz_adee048d-8c91-4807-bb6f-84bb4af68a27/registry-server/0.log" Feb 04 12:40:09 crc kubenswrapper[4728]: I0204 12:40:09.559206 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:40:09 crc kubenswrapper[4728]: E0204 12:40:09.559917 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:40:14 crc kubenswrapper[4728]: I0204 12:40:14.360707 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-pld8h_9679f4b5-ea5b-4998-92e0-08fd965f9b7f/prometheus-operator/0.log" Feb 04 12:40:14 crc kubenswrapper[4728]: I0204 12:40:14.375759 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-694c6ff88-8bwjr_39c2f459-1049-49f1-9010-39b354d6f9e9/prometheus-operator-admission-webhook/0.log" Feb 04 12:40:14 crc kubenswrapper[4728]: I0204 12:40:14.430291 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-694c6ff88-mnsn6_6d8e0076-8a70-44f0-a7c4-25c1a70a1e89/prometheus-operator-admission-webhook/0.log" Feb 04 12:40:14 crc kubenswrapper[4728]: I0204 12:40:14.548882 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-xdtdm_5cf5a02e-7b7f-453a-9336-c4d98f8470e6/perses-operator/0.log" Feb 04 12:40:14 crc kubenswrapper[4728]: I0204 12:40:14.582457 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6x4mh_58b528f7-d9c7-4cde-b7d0-4197972ef92a/operator/0.log" Feb 04 12:40:19 crc kubenswrapper[4728]: E0204 12:40:19.829469 4728 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.128:55432->38.102.83.128:40117: read tcp 38.102.83.128:55432->38.102.83.128:40117: read: connection reset by peer Feb 04 12:40:20 crc 
kubenswrapper[4728]: I0204 12:40:20.553989 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:40:20 crc kubenswrapper[4728]: E0204 12:40:20.554307 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.472165 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bbl57"] Feb 04 12:40:24 crc kubenswrapper[4728]: E0204 12:40:24.473296 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29afef70-d824-4c2b-b5fd-2b344a8326b1" containerName="container-00" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.473315 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="29afef70-d824-4c2b-b5fd-2b344a8326b1" containerName="container-00" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.473593 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="29afef70-d824-4c2b-b5fd-2b344a8326b1" containerName="container-00" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.475399 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.488815 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbl57"] Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.596379 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-catalog-content\") pod \"certified-operators-bbl57\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.596464 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-utilities\") pod \"certified-operators-bbl57\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.596512 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ptzf\" (UniqueName: \"kubernetes.io/projected/71a515a3-99fd-4816-9c75-3e8890d02746-kube-api-access-8ptzf\") pod \"certified-operators-bbl57\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.697894 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-catalog-content\") pod \"certified-operators-bbl57\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.697955 4728 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-utilities\") pod \"certified-operators-bbl57\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.697991 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ptzf\" (UniqueName: \"kubernetes.io/projected/71a515a3-99fd-4816-9c75-3e8890d02746-kube-api-access-8ptzf\") pod \"certified-operators-bbl57\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.699128 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-catalog-content\") pod \"certified-operators-bbl57\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.699384 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-utilities\") pod \"certified-operators-bbl57\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.742607 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ptzf\" (UniqueName: \"kubernetes.io/projected/71a515a3-99fd-4816-9c75-3e8890d02746-kube-api-access-8ptzf\") pod \"certified-operators-bbl57\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:24 crc kubenswrapper[4728]: I0204 12:40:24.817002 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:25 crc kubenswrapper[4728]: I0204 12:40:25.401550 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbl57"] Feb 04 12:40:25 crc kubenswrapper[4728]: W0204 12:40:25.567769 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71a515a3_99fd_4816_9c75_3e8890d02746.slice/crio-5a0b5dfc4c709ce2986016c9d95301901d09f2f4612d15f5bcf35b568db13898 WatchSource:0}: Error finding container 5a0b5dfc4c709ce2986016c9d95301901d09f2f4612d15f5bcf35b568db13898: Status 404 returned error can't find the container with id 5a0b5dfc4c709ce2986016c9d95301901d09f2f4612d15f5bcf35b568db13898 Feb 04 12:40:26 crc kubenswrapper[4728]: I0204 12:40:26.178562 4728 generic.go:334] "Generic (PLEG): container finished" podID="71a515a3-99fd-4816-9c75-3e8890d02746" containerID="d8ca2478a78f35406b4f54daf62994ed501ebcc5938019387301009afe5a83d0" exitCode=0 Feb 04 12:40:26 crc kubenswrapper[4728]: I0204 12:40:26.178629 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbl57" event={"ID":"71a515a3-99fd-4816-9c75-3e8890d02746","Type":"ContainerDied","Data":"d8ca2478a78f35406b4f54daf62994ed501ebcc5938019387301009afe5a83d0"} Feb 04 12:40:26 crc kubenswrapper[4728]: I0204 12:40:26.178907 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbl57" event={"ID":"71a515a3-99fd-4816-9c75-3e8890d02746","Type":"ContainerStarted","Data":"5a0b5dfc4c709ce2986016c9d95301901d09f2f4612d15f5bcf35b568db13898"} Feb 04 12:40:28 crc kubenswrapper[4728]: I0204 12:40:28.199385 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbl57" event={"ID":"71a515a3-99fd-4816-9c75-3e8890d02746","Type":"ContainerStarted","Data":"97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f"} Feb 04 12:40:33 crc kubenswrapper[4728]: I0204 12:40:33.555466 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:40:33 crc kubenswrapper[4728]: E0204 12:40:33.556130 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:40:35 crc kubenswrapper[4728]: I0204 12:40:35.261504 4728 generic.go:334] "Generic (PLEG): container finished" podID="71a515a3-99fd-4816-9c75-3e8890d02746" containerID="97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f" exitCode=0 Feb 04 12:40:35 crc kubenswrapper[4728]: I0204 12:40:35.261587 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbl57" event={"ID":"71a515a3-99fd-4816-9c75-3e8890d02746","Type":"ContainerDied","Data":"97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f"} Feb 04 12:40:35 crc kubenswrapper[4728]: I0204 12:40:35.265028 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 12:40:37 crc kubenswrapper[4728]: I0204 12:40:37.290105 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-bbl57" event={"ID":"71a515a3-99fd-4816-9c75-3e8890d02746","Type":"ContainerStarted","Data":"b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0"} Feb 04 12:40:44 crc kubenswrapper[4728]: I0204 12:40:44.817937 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:44 crc kubenswrapper[4728]: I0204 12:40:44.818585 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:44 crc kubenswrapper[4728]: I0204 12:40:44.872228 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:44 crc kubenswrapper[4728]: I0204 12:40:44.892274 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bbl57" podStartSLOduration=10.918112278 podStartE2EDuration="20.892256444s" podCreationTimestamp="2026-02-04 12:40:24 +0000 UTC" firstStartedPulling="2026-02-04 12:40:26.180304037 +0000 UTC m=+4375.323008422" lastFinishedPulling="2026-02-04 12:40:36.154448203 +0000 UTC m=+4385.297152588" observedRunningTime="2026-02-04 12:40:37.318592361 +0000 UTC m=+4386.461296766" watchObservedRunningTime="2026-02-04 12:40:44.892256444 +0000 UTC m=+4394.034960829" Feb 04 12:40:45 crc kubenswrapper[4728]: I0204 12:40:45.418515 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:46 crc kubenswrapper[4728]: I0204 12:40:46.553998 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:40:46 crc kubenswrapper[4728]: E0204 12:40:46.554277 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:40:47 crc kubenswrapper[4728]: I0204 12:40:47.511656 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbl57"] Feb 04 12:40:47 crc kubenswrapper[4728]: I0204 12:40:47.512200 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bbl57" podUID="71a515a3-99fd-4816-9c75-3e8890d02746" containerName="registry-server" containerID="cri-o://b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0" gracePeriod=2 Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.023018 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.194303 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-utilities\") pod \"71a515a3-99fd-4816-9c75-3e8890d02746\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.194667 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-catalog-content\") pod \"71a515a3-99fd-4816-9c75-3e8890d02746\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.194803 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ptzf\" (UniqueName: \"kubernetes.io/projected/71a515a3-99fd-4816-9c75-3e8890d02746-kube-api-access-8ptzf\") pod \"71a515a3-99fd-4816-9c75-3e8890d02746\" (UID: \"71a515a3-99fd-4816-9c75-3e8890d02746\") " Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.195070 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-utilities" (OuterVolumeSpecName: "utilities") pod "71a515a3-99fd-4816-9c75-3e8890d02746" (UID: "71a515a3-99fd-4816-9c75-3e8890d02746"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.195599 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.205025 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a515a3-99fd-4816-9c75-3e8890d02746-kube-api-access-8ptzf" (OuterVolumeSpecName: "kube-api-access-8ptzf") pod "71a515a3-99fd-4816-9c75-3e8890d02746" (UID: "71a515a3-99fd-4816-9c75-3e8890d02746"). InnerVolumeSpecName "kube-api-access-8ptzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.247121 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71a515a3-99fd-4816-9c75-3e8890d02746" (UID: "71a515a3-99fd-4816-9c75-3e8890d02746"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.297156 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71a515a3-99fd-4816-9c75-3e8890d02746-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.297195 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ptzf\" (UniqueName: \"kubernetes.io/projected/71a515a3-99fd-4816-9c75-3e8890d02746-kube-api-access-8ptzf\") on node \"crc\" DevicePath \"\"" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.413153 4728 generic.go:334] "Generic (PLEG): container finished" podID="71a515a3-99fd-4816-9c75-3e8890d02746" containerID="b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0" exitCode=0 Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.413376 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbl57" event={"ID":"71a515a3-99fd-4816-9c75-3e8890d02746","Type":"ContainerDied","Data":"b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0"} Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.413501 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbl57" event={"ID":"71a515a3-99fd-4816-9c75-3e8890d02746","Type":"ContainerDied","Data":"5a0b5dfc4c709ce2986016c9d95301901d09f2f4612d15f5bcf35b568db13898"} Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.413536 4728 scope.go:117] "RemoveContainer" containerID="b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.413422 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbl57" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.450501 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbl57"] Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.450598 4728 scope.go:117] "RemoveContainer" containerID="97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.458501 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bbl57"] Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.472603 4728 scope.go:117] "RemoveContainer" containerID="d8ca2478a78f35406b4f54daf62994ed501ebcc5938019387301009afe5a83d0" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.511406 4728 scope.go:117] "RemoveContainer" containerID="b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0" Feb 04 12:40:48 crc kubenswrapper[4728]: E0204 12:40:48.512018 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0\": container with ID starting with b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0 not found: ID does not exist" containerID="b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.512060 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0"} err="failed to get container status \"b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0\": rpc error: code = NotFound desc = could not find container \"b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0\": container with ID starting with b7045e57c73016050697da55e6bfea9ed9e77cdf09ebf2265f2d6d36a74f47d0 not found: ID does not exist" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.512091 4728 scope.go:117] "RemoveContainer" containerID="97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f" Feb 04 12:40:48 crc kubenswrapper[4728]: E0204 12:40:48.512423 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f\": container with ID starting with 97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f not found: ID does not exist" containerID="97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.512447 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f"} err="failed to get container status \"97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f\": rpc error: code = NotFound desc = could not find container \"97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f\": container with ID starting with 97edbd7c6dc485831b286145a9731f915a5f3bb4bb8441879c1749a3ea303e4f not found: ID does not exist" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.512459 4728 scope.go:117] "RemoveContainer" containerID="d8ca2478a78f35406b4f54daf62994ed501ebcc5938019387301009afe5a83d0" Feb 04 12:40:48 crc kubenswrapper[4728]: E0204 12:40:48.512804 4728 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d8ca2478a78f35406b4f54daf62994ed501ebcc5938019387301009afe5a83d0\": container with ID starting with d8ca2478a78f35406b4f54daf62994ed501ebcc5938019387301009afe5a83d0 not found: ID does not exist" containerID="d8ca2478a78f35406b4f54daf62994ed501ebcc5938019387301009afe5a83d0" Feb 04 12:40:48 crc kubenswrapper[4728]: I0204 12:40:48.512837 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ca2478a78f35406b4f54daf62994ed501ebcc5938019387301009afe5a83d0"} err="failed to get container status \"d8ca2478a78f35406b4f54daf62994ed501ebcc5938019387301009afe5a83d0\": rpc error: code = NotFound desc = could not find container \"d8ca2478a78f35406b4f54daf62994ed501ebcc5938019387301009afe5a83d0\": container with ID starting with d8ca2478a78f35406b4f54daf62994ed501ebcc5938019387301009afe5a83d0 not found: ID does not exist" Feb 04 12:40:49 crc kubenswrapper[4728]: I0204 12:40:49.564416 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a515a3-99fd-4816-9c75-3e8890d02746" path="/var/lib/kubelet/pods/71a515a3-99fd-4816-9c75-3e8890d02746/volumes" Feb 04 12:41:01 crc kubenswrapper[4728]: I0204 12:41:01.562949 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:41:01 crc kubenswrapper[4728]: E0204 12:41:01.563965 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:41:13 crc kubenswrapper[4728]: I0204 12:41:13.553911 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:41:13 crc kubenswrapper[4728]: E0204 12:41:13.554890 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:41:25 crc kubenswrapper[4728]: I0204 12:41:25.556890 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:41:25 crc kubenswrapper[4728]: E0204 12:41:25.557789 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:41:36 crc kubenswrapper[4728]: I0204 12:41:36.554297 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:41:36 crc kubenswrapper[4728]: E0204 12:41:36.555134 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:41:49 crc kubenswrapper[4728]: I0204 12:41:49.554635 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:41:49 crc kubenswrapper[4728]: E0204 12:41:49.555672 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:41:56 crc kubenswrapper[4728]: I0204 12:41:56.044933 4728 generic.go:334] "Generic (PLEG): container finished" podID="bdb7756f-ba5a-4b21-b273-33044aa95835" containerID="8d3734f3d9bffd6e6d55e27a08a05213b43eef5033e8d5daaeb23976aac2e57d" exitCode=0 Feb 04 12:41:56 crc kubenswrapper[4728]: I0204 12:41:56.045262 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kknjn/must-gather-xgvtw" event={"ID":"bdb7756f-ba5a-4b21-b273-33044aa95835","Type":"ContainerDied","Data":"8d3734f3d9bffd6e6d55e27a08a05213b43eef5033e8d5daaeb23976aac2e57d"} Feb 04 12:41:56 crc kubenswrapper[4728]: I0204 12:41:56.046255 4728 scope.go:117] "RemoveContainer" containerID="8d3734f3d9bffd6e6d55e27a08a05213b43eef5033e8d5daaeb23976aac2e57d" Feb 04 12:41:56 crc kubenswrapper[4728]: I0204 12:41:56.288063 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kknjn_must-gather-xgvtw_bdb7756f-ba5a-4b21-b273-33044aa95835/gather/0.log" Feb 04 12:42:02 crc kubenswrapper[4728]: I0204 12:42:02.554053 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:42:02 crc kubenswrapper[4728]: E0204 12:42:02.554849 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:42:04 crc kubenswrapper[4728]: I0204 12:42:04.979807 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kknjn/must-gather-xgvtw"] Feb 04 12:42:04 crc kubenswrapper[4728]: I0204 12:42:04.980664 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kknjn/must-gather-xgvtw" podUID="bdb7756f-ba5a-4b21-b273-33044aa95835" containerName="copy" containerID="cri-o://2b3e2bbc5df57c9e1180017ef981495d5d09d6e7610db1314fc97b4e50432499" gracePeriod=2 Feb 04 12:42:04 crc kubenswrapper[4728]: I0204 12:42:04.989082 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kknjn/must-gather-xgvtw"] Feb 04 12:42:05 crc kubenswrapper[4728]: I0204 12:42:05.145871 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-kknjn_must-gather-xgvtw_bdb7756f-ba5a-4b21-b273-33044aa95835/copy/0.log" Feb 04 12:42:05 crc kubenswrapper[4728]: I0204 12:42:05.146387 4728 generic.go:334] "Generic (PLEG): container finished" podID="bdb7756f-ba5a-4b21-b273-33044aa95835" containerID="2b3e2bbc5df57c9e1180017ef981495d5d09d6e7610db1314fc97b4e50432499" exitCode=143 Feb 04 12:42:05 crc kubenswrapper[4728]: I0204 12:42:05.547502 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kknjn_must-gather-xgvtw_bdb7756f-ba5a-4b21-b273-33044aa95835/copy/0.log" Feb 04 12:42:05 crc kubenswrapper[4728]: I0204 12:42:05.548545 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kknjn/must-gather-xgvtw" Feb 04 12:42:05 crc kubenswrapper[4728]: I0204 12:42:05.620944 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7wtr\" (UniqueName: \"kubernetes.io/projected/bdb7756f-ba5a-4b21-b273-33044aa95835-kube-api-access-j7wtr\") pod \"bdb7756f-ba5a-4b21-b273-33044aa95835\" (UID: \"bdb7756f-ba5a-4b21-b273-33044aa95835\") " Feb 04 12:42:05 crc kubenswrapper[4728]: I0204 12:42:05.621027 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bdb7756f-ba5a-4b21-b273-33044aa95835-must-gather-output\") pod \"bdb7756f-ba5a-4b21-b273-33044aa95835\" (UID: \"bdb7756f-ba5a-4b21-b273-33044aa95835\") " Feb 04 12:42:05 crc kubenswrapper[4728]: I0204 12:42:05.637685 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdb7756f-ba5a-4b21-b273-33044aa95835-kube-api-access-j7wtr" (OuterVolumeSpecName: "kube-api-access-j7wtr") pod "bdb7756f-ba5a-4b21-b273-33044aa95835" (UID: "bdb7756f-ba5a-4b21-b273-33044aa95835"). InnerVolumeSpecName "kube-api-access-j7wtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:42:05 crc kubenswrapper[4728]: I0204 12:42:05.723884 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7wtr\" (UniqueName: \"kubernetes.io/projected/bdb7756f-ba5a-4b21-b273-33044aa95835-kube-api-access-j7wtr\") on node \"crc\" DevicePath \"\"" Feb 04 12:42:05 crc kubenswrapper[4728]: I0204 12:42:05.784343 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdb7756f-ba5a-4b21-b273-33044aa95835-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bdb7756f-ba5a-4b21-b273-33044aa95835" (UID: "bdb7756f-ba5a-4b21-b273-33044aa95835"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:42:05 crc kubenswrapper[4728]: I0204 12:42:05.826801 4728 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bdb7756f-ba5a-4b21-b273-33044aa95835-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 04 12:42:06 crc kubenswrapper[4728]: I0204 12:42:06.157456 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kknjn_must-gather-xgvtw_bdb7756f-ba5a-4b21-b273-33044aa95835/copy/0.log" Feb 04 12:42:06 crc kubenswrapper[4728]: I0204 12:42:06.157795 4728 scope.go:117] "RemoveContainer" containerID="2b3e2bbc5df57c9e1180017ef981495d5d09d6e7610db1314fc97b4e50432499" Feb 04 12:42:06 crc kubenswrapper[4728]: I0204 12:42:06.157908 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kknjn/must-gather-xgvtw" Feb 04 12:42:06 crc kubenswrapper[4728]: I0204 12:42:06.214814 4728 scope.go:117] "RemoveContainer" containerID="8d3734f3d9bffd6e6d55e27a08a05213b43eef5033e8d5daaeb23976aac2e57d" Feb 04 12:42:07 crc kubenswrapper[4728]: I0204 12:42:07.565908 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdb7756f-ba5a-4b21-b273-33044aa95835" path="/var/lib/kubelet/pods/bdb7756f-ba5a-4b21-b273-33044aa95835/volumes" Feb 04 12:42:16 crc kubenswrapper[4728]: I0204 12:42:16.553971 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:42:16 crc kubenswrapper[4728]: E0204 12:42:16.554896 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:42:30 crc kubenswrapper[4728]: I0204 12:42:30.555202 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:42:30 crc kubenswrapper[4728]: E0204 12:42:30.555927 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:42:45 crc kubenswrapper[4728]: I0204 12:42:45.554996 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:42:45 crc kubenswrapper[4728]: E0204 12:42:45.556025 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:42:59 crc kubenswrapper[4728]: I0204 12:42:59.554296 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:42:59 crc kubenswrapper[4728]: E0204 12:42:59.555102 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:43:12 crc kubenswrapper[4728]: I0204 12:43:12.553780 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:43:12 crc kubenswrapper[4728]: E0204 12:43:12.554575 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:43:24 crc kubenswrapper[4728]: I0204 12:43:24.554900 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:43:24 crc kubenswrapper[4728]: E0204 12:43:24.555570 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:43:38 crc kubenswrapper[4728]: I0204 12:43:38.554146 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:43:38 crc kubenswrapper[4728]: E0204 12:43:38.555082 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:43:53 crc kubenswrapper[4728]: I0204 12:43:53.554637 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:43:53 crc kubenswrapper[4728]: E0204 12:43:53.555585 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:44:04 crc kubenswrapper[4728]: I0204 12:44:04.555007 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:44:04 crc kubenswrapper[4728]: E0204 12:44:04.556359 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grzvj_openshift-machine-config-operator(3c8409df-def9-46a0-a813-6788ddf1e292)\"" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.644355 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k8jt4"] Feb 04 12:44:13 crc kubenswrapper[4728]: E0204 12:44:13.645423 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a515a3-99fd-4816-9c75-3e8890d02746" containerName="extract-utilities" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.645441 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="71a515a3-99fd-4816-9c75-3e8890d02746" containerName="extract-utilities" Feb 04 12:44:13 crc kubenswrapper[4728]: E0204 12:44:13.645451 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a515a3-99fd-4816-9c75-3e8890d02746" containerName="registry-server" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.645458 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a515a3-99fd-4816-9c75-3e8890d02746" containerName="registry-server" Feb 04 12:44:13 crc kubenswrapper[4728]: E0204 12:44:13.645506 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a515a3-99fd-4816-9c75-3e8890d02746" containerName="extract-content" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.645515 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a515a3-99fd-4816-9c75-3e8890d02746" containerName="extract-content" Feb 04 12:44:13 crc kubenswrapper[4728]: E0204 12:44:13.645531 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb7756f-ba5a-4b21-b273-33044aa95835" containerName="copy" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.645539 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb7756f-ba5a-4b21-b273-33044aa95835" containerName="copy" Feb 04 12:44:13 crc kubenswrapper[4728]: E0204 12:44:13.645559 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb7756f-ba5a-4b21-b273-33044aa95835" containerName="gather" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.645567 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb7756f-ba5a-4b21-b273-33044aa95835" containerName="gather" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.645823 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a515a3-99fd-4816-9c75-3e8890d02746" containerName="registry-server" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.645858 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdb7756f-ba5a-4b21-b273-33044aa95835" containerName="copy" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.645873 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdb7756f-ba5a-4b21-b273-33044aa95835" containerName="gather" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.647698 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.672880 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8jt4"] Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.748667 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbvpn\" (UniqueName: \"kubernetes.io/projected/106d5276-446c-4b6a-8c5e-a716d739e899-kube-api-access-pbvpn\") pod \"redhat-operators-k8jt4\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") " pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.748733 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-utilities\") pod \"redhat-operators-k8jt4\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") " pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.748815 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-catalog-content\") pod \"redhat-operators-k8jt4\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") " pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.850628 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbvpn\" (UniqueName: \"kubernetes.io/projected/106d5276-446c-4b6a-8c5e-a716d739e899-kube-api-access-pbvpn\") pod \"redhat-operators-k8jt4\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") " pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.850680 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-utilities\") pod \"redhat-operators-k8jt4\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") " pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.850740 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-catalog-content\") pod \"redhat-operators-k8jt4\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") " pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.851544 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-catalog-content\") pod \"redhat-operators-k8jt4\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") " pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.851654 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-utilities\") pod \"redhat-operators-k8jt4\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") " pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.882790 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pbvpn\" (UniqueName: \"kubernetes.io/projected/106d5276-446c-4b6a-8c5e-a716d739e899-kube-api-access-pbvpn\") pod \"redhat-operators-k8jt4\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") " pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:13 crc kubenswrapper[4728]: I0204 12:44:13.980233 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:14 crc kubenswrapper[4728]: I0204 12:44:14.487930 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8jt4"] Feb 04 12:44:15 crc kubenswrapper[4728]: I0204 12:44:15.400589 4728 generic.go:334] "Generic (PLEG): container finished" podID="106d5276-446c-4b6a-8c5e-a716d739e899" containerID="1125388796b2a52c54b5503327c3c194f0e869926b17c2f7e1a31fb3191a2b8c" exitCode=0 Feb 04 12:44:15 crc kubenswrapper[4728]: I0204 12:44:15.400944 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8jt4" event={"ID":"106d5276-446c-4b6a-8c5e-a716d739e899","Type":"ContainerDied","Data":"1125388796b2a52c54b5503327c3c194f0e869926b17c2f7e1a31fb3191a2b8c"} Feb 04 12:44:15 crc kubenswrapper[4728]: I0204 12:44:15.400982 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8jt4" event={"ID":"106d5276-446c-4b6a-8c5e-a716d739e899","Type":"ContainerStarted","Data":"8445d74b56286f19ea704608476ca31553096bf595b40cec81502cde226153b4"} Feb 04 12:44:18 crc kubenswrapper[4728]: I0204 12:44:18.427960 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8jt4" event={"ID":"106d5276-446c-4b6a-8c5e-a716d739e899","Type":"ContainerStarted","Data":"dfae8a2c2c8bf66d6ce88336e31c3dd69d4c89b72a559b969ca8f3c376384b96"} Feb 04 12:44:19 crc kubenswrapper[4728]: I0204 12:44:19.553498 4728 scope.go:117] "RemoveContainer" containerID="f7bfb9a440064726419cdfb5a6636de25b8e9656887dbf6874b2b550fef8521e" Feb 04 12:44:20 crc kubenswrapper[4728]: I0204 12:44:20.448454 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" event={"ID":"3c8409df-def9-46a0-a813-6788ddf1e292","Type":"ContainerStarted","Data":"ef0142427a58cb14b7b9317e9eac67421480668ba95e5ec96b985700ce5421bd"} Feb 04 12:44:31 crc kubenswrapper[4728]: I0204 12:44:31.552839 4728 generic.go:334] "Generic (PLEG): container finished" podID="106d5276-446c-4b6a-8c5e-a716d739e899" containerID="dfae8a2c2c8bf66d6ce88336e31c3dd69d4c89b72a559b969ca8f3c376384b96" exitCode=0 Feb 04 12:44:31 crc kubenswrapper[4728]: I0204 12:44:31.564997 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8jt4" event={"ID":"106d5276-446c-4b6a-8c5e-a716d739e899","Type":"ContainerDied","Data":"dfae8a2c2c8bf66d6ce88336e31c3dd69d4c89b72a559b969ca8f3c376384b96"} Feb 04 12:44:34 crc kubenswrapper[4728]: I0204 12:44:34.583086 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8jt4" event={"ID":"106d5276-446c-4b6a-8c5e-a716d739e899","Type":"ContainerStarted","Data":"3dd3bdfeadfe42c3e2a9e621acc44ab456a8a1dcc2fbb4a6227a2e01e135343e"} Feb 04 12:44:34 crc kubenswrapper[4728]: I0204 12:44:34.602042 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k8jt4" podStartSLOduration=3.540166492 podStartE2EDuration="21.60201966s" podCreationTimestamp="2026-02-04 12:44:13 +0000 UTC" 
firstStartedPulling="2026-02-04 12:44:15.402637885 +0000 UTC m=+4604.545342270" lastFinishedPulling="2026-02-04 12:44:33.464491053 +0000 UTC m=+4622.607195438" observedRunningTime="2026-02-04 12:44:34.600560064 +0000 UTC m=+4623.743264459" watchObservedRunningTime="2026-02-04 12:44:34.60201966 +0000 UTC m=+4623.744724065" Feb 04 12:44:43 crc kubenswrapper[4728]: I0204 12:44:43.986085 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:43 crc kubenswrapper[4728]: I0204 12:44:43.986707 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:44 crc kubenswrapper[4728]: I0204 12:44:44.034256 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:44 crc kubenswrapper[4728]: I0204 12:44:44.766769 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k8jt4" Feb 04 12:44:44 crc kubenswrapper[4728]: I0204 12:44:44.840659 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8jt4"] Feb 04 12:44:46 crc kubenswrapper[4728]: I0204 12:44:46.740036 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k8jt4" podUID="106d5276-446c-4b6a-8c5e-a716d739e899" containerName="registry-server" containerID="cri-o://3dd3bdfeadfe42c3e2a9e621acc44ab456a8a1dcc2fbb4a6227a2e01e135343e" gracePeriod=2 Feb 04 12:44:47 crc kubenswrapper[4728]: I0204 12:44:47.759614 4728 generic.go:334] "Generic (PLEG): container finished" podID="106d5276-446c-4b6a-8c5e-a716d739e899" containerID="3dd3bdfeadfe42c3e2a9e621acc44ab456a8a1dcc2fbb4a6227a2e01e135343e" exitCode=0 Feb 04 12:44:47 crc kubenswrapper[4728]: I0204 12:44:47.759787 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8jt4" event={"ID":"106d5276-446c-4b6a-8c5e-a716d739e899","Type":"ContainerDied","Data":"3dd3bdfeadfe42c3e2a9e621acc44ab456a8a1dcc2fbb4a6227a2e01e135343e"} Feb 04 12:44:47 crc kubenswrapper[4728]: I0204 12:44:47.851637 4728 util.go:48] "No ready sandbox for pod can be found. 
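
[Editor's note, not part of the captured log] The pod_startup_latency_tracker entry at 12:44:34 is internally consistent: podStartE2EDuration (21.602s) is observedRunningTime minus podCreationTimestamp (12:44:34.602 − 12:44:13), and podStartSLOduration (3.540s) is that end-to-end time minus the image-pull window (lastFinishedPulling − firstStartedPulling ≈ 18.062s). A quick check of the arithmetic using the monotonic m=+... offsets from the entry:

package main

import "fmt"

func main() {
	// Monotonic offsets (the m=+... values) and durations from the entry above.
	firstPull := 4604.545342270
	lastPull := 4622.607195438
	e2e := 21.602019660 // podStartE2EDuration in seconds

	// SLO duration excludes time spent pulling images.
	slo := e2e - (lastPull - firstPull)
	fmt.Printf("%.9f\n", slo) // 3.540166492, matching podStartSLOduration
}
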
Feb 04 12:44:47 crc kubenswrapper[4728]: I0204 12:44:47.969448 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-catalog-content\") pod \"106d5276-446c-4b6a-8c5e-a716d739e899\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") "
Feb 04 12:44:47 crc kubenswrapper[4728]: I0204 12:44:47.969726 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbvpn\" (UniqueName: \"kubernetes.io/projected/106d5276-446c-4b6a-8c5e-a716d739e899-kube-api-access-pbvpn\") pod \"106d5276-446c-4b6a-8c5e-a716d739e899\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") "
Feb 04 12:44:47 crc kubenswrapper[4728]: I0204 12:44:47.969829 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-utilities\") pod \"106d5276-446c-4b6a-8c5e-a716d739e899\" (UID: \"106d5276-446c-4b6a-8c5e-a716d739e899\") "
Feb 04 12:44:47 crc kubenswrapper[4728]: I0204 12:44:47.971052 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-utilities" (OuterVolumeSpecName: "utilities") pod "106d5276-446c-4b6a-8c5e-a716d739e899" (UID: "106d5276-446c-4b6a-8c5e-a716d739e899"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:44:47 crc kubenswrapper[4728]: I0204 12:44:47.975953 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106d5276-446c-4b6a-8c5e-a716d739e899-kube-api-access-pbvpn" (OuterVolumeSpecName: "kube-api-access-pbvpn") pod "106d5276-446c-4b6a-8c5e-a716d739e899" (UID: "106d5276-446c-4b6a-8c5e-a716d739e899"). InnerVolumeSpecName "kube-api-access-pbvpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:44:48 crc kubenswrapper[4728]: I0204 12:44:48.075913 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbvpn\" (UniqueName: \"kubernetes.io/projected/106d5276-446c-4b6a-8c5e-a716d739e899-kube-api-access-pbvpn\") on node \"crc\" DevicePath \"\""
Feb 04 12:44:48 crc kubenswrapper[4728]: I0204 12:44:48.075974 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 12:44:48 crc kubenswrapper[4728]: I0204 12:44:48.093532 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "106d5276-446c-4b6a-8c5e-a716d739e899" (UID: "106d5276-446c-4b6a-8c5e-a716d739e899"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:44:48 crc kubenswrapper[4728]: I0204 12:44:48.177864 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/106d5276-446c-4b6a-8c5e-a716d739e899-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 04 12:44:48 crc kubenswrapper[4728]: I0204 12:44:48.774693 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8jt4" event={"ID":"106d5276-446c-4b6a-8c5e-a716d739e899","Type":"ContainerDied","Data":"8445d74b56286f19ea704608476ca31553096bf595b40cec81502cde226153b4"}
Feb 04 12:44:48 crc kubenswrapper[4728]: I0204 12:44:48.774785 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8jt4"
Feb 04 12:44:48 crc kubenswrapper[4728]: I0204 12:44:48.774909 4728 scope.go:117] "RemoveContainer" containerID="3dd3bdfeadfe42c3e2a9e621acc44ab456a8a1dcc2fbb4a6227a2e01e135343e"
Feb 04 12:44:48 crc kubenswrapper[4728]: I0204 12:44:48.797005 4728 scope.go:117] "RemoveContainer" containerID="dfae8a2c2c8bf66d6ce88336e31c3dd69d4c89b72a559b969ca8f3c376384b96"
Feb 04 12:44:48 crc kubenswrapper[4728]: I0204 12:44:48.824232 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8jt4"]
Feb 04 12:44:48 crc kubenswrapper[4728]: I0204 12:44:48.836722 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k8jt4"]
Feb 04 12:44:49 crc kubenswrapper[4728]: I0204 12:44:49.386597 4728 scope.go:117] "RemoveContainer" containerID="1125388796b2a52c54b5503327c3c194f0e869926b17c2f7e1a31fb3191a2b8c"
Feb 04 12:44:49 crc kubenswrapper[4728]: I0204 12:44:49.569043 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="106d5276-446c-4b6a-8c5e-a716d739e899" path="/var/lib/kubelet/pods/106d5276-446c-4b6a-8c5e-a716d739e899/volumes"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.179726 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"]
Feb 04 12:45:00 crc kubenswrapper[4728]: E0204 12:45:00.180784 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106d5276-446c-4b6a-8c5e-a716d739e899" containerName="registry-server"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.180802 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="106d5276-446c-4b6a-8c5e-a716d739e899" containerName="registry-server"
Feb 04 12:45:00 crc kubenswrapper[4728]: E0204 12:45:00.180814 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106d5276-446c-4b6a-8c5e-a716d739e899" containerName="extract-content"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.180821 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="106d5276-446c-4b6a-8c5e-a716d739e899" containerName="extract-content"
Feb 04 12:45:00 crc kubenswrapper[4728]: E0204 12:45:00.180857 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106d5276-446c-4b6a-8c5e-a716d739e899" containerName="extract-utilities"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.180865 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="106d5276-446c-4b6a-8c5e-a716d739e899" containerName="extract-utilities"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.181180 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="106d5276-446c-4b6a-8c5e-a716d739e899" containerName="registry-server"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.181914 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.185699 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.186432 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.196640 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"]
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.339068 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac50dc40-cbee-4c97-8e50-3e1380cee118-config-volume\") pod \"collect-profiles-29503485-8hxnc\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.339185 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac50dc40-cbee-4c97-8e50-3e1380cee118-secret-volume\") pod \"collect-profiles-29503485-8hxnc\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.339272 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw59j\" (UniqueName: \"kubernetes.io/projected/ac50dc40-cbee-4c97-8e50-3e1380cee118-kube-api-access-lw59j\") pod \"collect-profiles-29503485-8hxnc\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.441099 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac50dc40-cbee-4c97-8e50-3e1380cee118-secret-volume\") pod \"collect-profiles-29503485-8hxnc\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.441218 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw59j\" (UniqueName: \"kubernetes.io/projected/ac50dc40-cbee-4c97-8e50-3e1380cee118-kube-api-access-lw59j\") pod \"collect-profiles-29503485-8hxnc\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.441431 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac50dc40-cbee-4c97-8e50-3e1380cee118-config-volume\") pod \"collect-profiles-29503485-8hxnc\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.442480 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac50dc40-cbee-4c97-8e50-3e1380cee118-config-volume\") pod \"collect-profiles-29503485-8hxnc\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.853292 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac50dc40-cbee-4c97-8e50-3e1380cee118-secret-volume\") pod \"collect-profiles-29503485-8hxnc\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:00 crc kubenswrapper[4728]: I0204 12:45:00.854724 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw59j\" (UniqueName: \"kubernetes.io/projected/ac50dc40-cbee-4c97-8e50-3e1380cee118-kube-api-access-lw59j\") pod \"collect-profiles-29503485-8hxnc\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:01 crc kubenswrapper[4728]: I0204 12:45:01.113211 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:01 crc kubenswrapper[4728]: I0204 12:45:01.660911 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"]
Feb 04 12:45:01 crc kubenswrapper[4728]: I0204 12:45:01.893629 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc" event={"ID":"ac50dc40-cbee-4c97-8e50-3e1380cee118","Type":"ContainerStarted","Data":"d70e837a5e7ed79d748776873e3c5876aab75f88678bdc7a769b6b9df8afd264"}
Feb 04 12:45:02 crc kubenswrapper[4728]: I0204 12:45:02.903865 4728 generic.go:334] "Generic (PLEG): container finished" podID="ac50dc40-cbee-4c97-8e50-3e1380cee118" containerID="bb8a154625b88a44fcfdf234911eaaf79574e62818e733b782c1e7ede0679eba" exitCode=0
Feb 04 12:45:02 crc kubenswrapper[4728]: I0204 12:45:02.903919 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc" event={"ID":"ac50dc40-cbee-4c97-8e50-3e1380cee118","Type":"ContainerDied","Data":"bb8a154625b88a44fcfdf234911eaaf79574e62818e733b782c1e7ede0679eba"}
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.266332 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.423932 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac50dc40-cbee-4c97-8e50-3e1380cee118-secret-volume\") pod \"ac50dc40-cbee-4c97-8e50-3e1380cee118\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") "
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.424115 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac50dc40-cbee-4c97-8e50-3e1380cee118-config-volume\") pod \"ac50dc40-cbee-4c97-8e50-3e1380cee118\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") "
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.424269 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw59j\" (UniqueName: \"kubernetes.io/projected/ac50dc40-cbee-4c97-8e50-3e1380cee118-kube-api-access-lw59j\") pod \"ac50dc40-cbee-4c97-8e50-3e1380cee118\" (UID: \"ac50dc40-cbee-4c97-8e50-3e1380cee118\") "
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.424815 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac50dc40-cbee-4c97-8e50-3e1380cee118-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac50dc40-cbee-4c97-8e50-3e1380cee118" (UID: "ac50dc40-cbee-4c97-8e50-3e1380cee118"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.430096 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac50dc40-cbee-4c97-8e50-3e1380cee118-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac50dc40-cbee-4c97-8e50-3e1380cee118" (UID: "ac50dc40-cbee-4c97-8e50-3e1380cee118"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.431550 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac50dc40-cbee-4c97-8e50-3e1380cee118-kube-api-access-lw59j" (OuterVolumeSpecName: "kube-api-access-lw59j") pod "ac50dc40-cbee-4c97-8e50-3e1380cee118" (UID: "ac50dc40-cbee-4c97-8e50-3e1380cee118"). InnerVolumeSpecName "kube-api-access-lw59j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.526207 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac50dc40-cbee-4c97-8e50-3e1380cee118-config-volume\") on node \"crc\" DevicePath \"\""
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.526248 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw59j\" (UniqueName: \"kubernetes.io/projected/ac50dc40-cbee-4c97-8e50-3e1380cee118-kube-api-access-lw59j\") on node \"crc\" DevicePath \"\""
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.526258 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac50dc40-cbee-4c97-8e50-3e1380cee118-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.924809 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc" event={"ID":"ac50dc40-cbee-4c97-8e50-3e1380cee118","Type":"ContainerDied","Data":"d70e837a5e7ed79d748776873e3c5876aab75f88678bdc7a769b6b9df8afd264"}
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.924860 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d70e837a5e7ed79d748776873e3c5876aab75f88678bdc7a769b6b9df8afd264"
Feb 04 12:45:04 crc kubenswrapper[4728]: I0204 12:45:04.924879 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29503485-8hxnc"
Feb 04 12:45:05 crc kubenswrapper[4728]: I0204 12:45:05.342350 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw"]
Feb 04 12:45:05 crc kubenswrapper[4728]: I0204 12:45:05.351659 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29503440-m65nw"]
Feb 04 12:45:05 crc kubenswrapper[4728]: I0204 12:45:05.567203 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6273ef3-9992-44a4-8447-29d22c90fab9" path="/var/lib/kubelet/pods/a6273ef3-9992-44a4-8447-29d22c90fab9/volumes"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.315304 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xvm4z"]
Feb 04 12:45:19 crc kubenswrapper[4728]: E0204 12:45:19.316577 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac50dc40-cbee-4c97-8e50-3e1380cee118" containerName="collect-profiles"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.316593 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac50dc40-cbee-4c97-8e50-3e1380cee118" containerName="collect-profiles"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.316831 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac50dc40-cbee-4c97-8e50-3e1380cee118" containerName="collect-profiles"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.318406 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.334386 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvm4z"]
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.431270 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkh46\" (UniqueName: \"kubernetes.io/projected/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-kube-api-access-tkh46\") pod \"redhat-marketplace-xvm4z\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") " pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.431472 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-utilities\") pod \"redhat-marketplace-xvm4z\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") " pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.431909 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-catalog-content\") pod \"redhat-marketplace-xvm4z\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") " pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.533668 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkh46\" (UniqueName: \"kubernetes.io/projected/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-kube-api-access-tkh46\") pod \"redhat-marketplace-xvm4z\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") " pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.533797 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-utilities\") pod \"redhat-marketplace-xvm4z\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") " pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.533925 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-catalog-content\") pod \"redhat-marketplace-xvm4z\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") " pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.534571 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-utilities\") pod \"redhat-marketplace-xvm4z\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") " pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.534576 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-catalog-content\") pod \"redhat-marketplace-xvm4z\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") " pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.559675 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkh46\" (UniqueName: \"kubernetes.io/projected/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-kube-api-access-tkh46\") pod \"redhat-marketplace-xvm4z\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") " pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:19 crc kubenswrapper[4728]: I0204 12:45:19.641649 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:20 crc kubenswrapper[4728]: I0204 12:45:20.203872 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvm4z"]
Feb 04 12:45:21 crc kubenswrapper[4728]: I0204 12:45:21.086992 4728 generic.go:334] "Generic (PLEG): container finished" podID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" containerID="312eb7e2337e2bf45148b08130f4cf79bf8d46541af996841d4de884bd5da59e" exitCode=0
Feb 04 12:45:21 crc kubenswrapper[4728]: I0204 12:45:21.088790 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvm4z" event={"ID":"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9","Type":"ContainerDied","Data":"312eb7e2337e2bf45148b08130f4cf79bf8d46541af996841d4de884bd5da59e"}
Feb 04 12:45:21 crc kubenswrapper[4728]: I0204 12:45:21.089616 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvm4z" event={"ID":"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9","Type":"ContainerStarted","Data":"12668448de6ad30edbc364ea6d0c524b985d6be32cab5681f750e384b708f224"}
Feb 04 12:45:22 crc kubenswrapper[4728]: I0204 12:45:22.103322 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvm4z" event={"ID":"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9","Type":"ContainerStarted","Data":"eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385"}
Feb 04 12:45:24 crc kubenswrapper[4728]: I0204 12:45:24.124398 4728 generic.go:334] "Generic (PLEG): container finished" podID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" containerID="eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385" exitCode=0
Feb 04 12:45:24 crc kubenswrapper[4728]: I0204 12:45:24.124481 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvm4z" event={"ID":"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9","Type":"ContainerDied","Data":"eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385"}
Feb 04 12:45:25 crc kubenswrapper[4728]: I0204 12:45:25.139524 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvm4z" event={"ID":"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9","Type":"ContainerStarted","Data":"745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2"}
Feb 04 12:45:29 crc kubenswrapper[4728]: I0204 12:45:29.642820 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:29 crc kubenswrapper[4728]: I0204 12:45:29.643447 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:29 crc kubenswrapper[4728]: I0204 12:45:29.697001 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:29 crc kubenswrapper[4728]: I0204 12:45:29.720599 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xvm4z" podStartSLOduration=7.288127543 podStartE2EDuration="10.720577371s" podCreationTimestamp="2026-02-04 12:45:19 +0000 UTC" firstStartedPulling="2026-02-04 12:45:21.100636615 +0000 UTC m=+4670.243341000" lastFinishedPulling="2026-02-04 12:45:24.533086443 +0000 UTC m=+4673.675790828" observedRunningTime="2026-02-04 12:45:25.166153426 +0000 UTC m=+4674.308857821" watchObservedRunningTime="2026-02-04 12:45:29.720577371 +0000 UTC m=+4678.863281756"
Feb 04 12:45:30 crc kubenswrapper[4728]: I0204 12:45:30.242862 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:30 crc kubenswrapper[4728]: I0204 12:45:30.294977 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvm4z"]
Feb 04 12:45:32 crc kubenswrapper[4728]: I0204 12:45:32.207607 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xvm4z" podUID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" containerName="registry-server" containerID="cri-o://745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2" gracePeriod=2
Feb 04 12:45:32 crc kubenswrapper[4728]: I0204 12:45:32.690963 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvm4z"
Feb 04 12:45:32 crc kubenswrapper[4728]: I0204 12:45:32.839274 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-utilities\") pod \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") "
Feb 04 12:45:32 crc kubenswrapper[4728]: I0204 12:45:32.839463 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-catalog-content\") pod \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") "
Feb 04 12:45:32 crc kubenswrapper[4728]: I0204 12:45:32.839516 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkh46\" (UniqueName: \"kubernetes.io/projected/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-kube-api-access-tkh46\") pod \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\" (UID: \"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9\") "
Feb 04 12:45:32 crc kubenswrapper[4728]: I0204 12:45:32.839589 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-utilities" (OuterVolumeSpecName: "utilities") pod "1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" (UID: "1b7a45b1-b5ba-40ea-a8d2-41e840a616b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:45:32 crc kubenswrapper[4728]: I0204 12:45:32.841634 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-utilities\") on node \"crc\" DevicePath \"\""
Feb 04 12:45:32 crc kubenswrapper[4728]: I0204 12:45:32.845245 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-kube-api-access-tkh46" (OuterVolumeSpecName: "kube-api-access-tkh46") pod "1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" (UID: "1b7a45b1-b5ba-40ea-a8d2-41e840a616b9"). InnerVolumeSpecName "kube-api-access-tkh46". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:45:32 crc kubenswrapper[4728]: I0204 12:45:32.943287 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkh46\" (UniqueName: \"kubernetes.io/projected/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-kube-api-access-tkh46\") on node \"crc\" DevicePath \"\"" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.219844 4728 generic.go:334] "Generic (PLEG): container finished" podID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" containerID="745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2" exitCode=0 Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.219909 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvm4z" event={"ID":"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9","Type":"ContainerDied","Data":"745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2"} Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.219972 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvm4z" event={"ID":"1b7a45b1-b5ba-40ea-a8d2-41e840a616b9","Type":"ContainerDied","Data":"12668448de6ad30edbc364ea6d0c524b985d6be32cab5681f750e384b708f224"} Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.219994 4728 scope.go:117] "RemoveContainer" containerID="745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.219929 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvm4z" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.240870 4728 scope.go:117] "RemoveContainer" containerID="eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.264213 4728 scope.go:117] "RemoveContainer" containerID="312eb7e2337e2bf45148b08130f4cf79bf8d46541af996841d4de884bd5da59e" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.318542 4728 scope.go:117] "RemoveContainer" containerID="745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2" Feb 04 12:45:33 crc kubenswrapper[4728]: E0204 12:45:33.319077 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2\": container with ID starting with 745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2 not found: ID does not exist" containerID="745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.319129 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2"} err="failed to get container status \"745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2\": rpc error: code = NotFound desc = could not find container \"745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2\": container with ID starting with 745468c40bfa915cfbd59fa78a827fd810fa07e9760dfd693f91a7802c44d1f2 not found: ID does not exist" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.319163 4728 scope.go:117] "RemoveContainer" containerID="eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385" Feb 04 12:45:33 crc kubenswrapper[4728]: E0204 12:45:33.319464 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385\": container with ID starting with eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385 not found: ID does not exist" containerID="eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.319495 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385"} err="failed to get container status \"eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385\": rpc error: code = NotFound desc = could not find container \"eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385\": container with ID starting with eabec2997426eba7f66efcfe803cc3a7d94fc1cc9e404d744bc7cfbbc67d8385 not found: ID does not exist" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.319510 4728 scope.go:117] "RemoveContainer" containerID="312eb7e2337e2bf45148b08130f4cf79bf8d46541af996841d4de884bd5da59e" Feb 04 12:45:33 crc kubenswrapper[4728]: E0204 12:45:33.320332 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"312eb7e2337e2bf45148b08130f4cf79bf8d46541af996841d4de884bd5da59e\": container with ID starting with 312eb7e2337e2bf45148b08130f4cf79bf8d46541af996841d4de884bd5da59e not found: ID does not exist" containerID="312eb7e2337e2bf45148b08130f4cf79bf8d46541af996841d4de884bd5da59e" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.320483 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"312eb7e2337e2bf45148b08130f4cf79bf8d46541af996841d4de884bd5da59e"} err="failed to get container status \"312eb7e2337e2bf45148b08130f4cf79bf8d46541af996841d4de884bd5da59e\": rpc error: code = NotFound desc = could not find container \"312eb7e2337e2bf45148b08130f4cf79bf8d46541af996841d4de884bd5da59e\": container with ID starting with 312eb7e2337e2bf45148b08130f4cf79bf8d46541af996841d4de884bd5da59e not found: ID does not exist" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.345622 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" (UID: "1b7a45b1-b5ba-40ea-a8d2-41e840a616b9"). InnerVolumeSpecName "catalog-content". 
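
[Editor's note, not part of the captured log] The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets above (and the d8ca2478... entries at the top of this excerpt) are benign: the container is already gone, the CRI lookup returns gRPC NotFound, and the kubelet just logs the result. Cleanup paths that want to be idempotent typically treat that code as success; a minimal sketch using the standard gRPC status helpers, illustrative only and not kubelet source:

package cleanup

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound treats "already deleted" as success so that container
// cleanup stays idempotent, matching the benign NotFound triplets above.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}
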
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.352931 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.568536 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvm4z"] Feb 04 12:45:33 crc kubenswrapper[4728]: I0204 12:45:33.575400 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvm4z"] Feb 04 12:45:35 crc kubenswrapper[4728]: I0204 12:45:35.575629 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" path="/var/lib/kubelet/pods/1b7a45b1-b5ba-40ea-a8d2-41e840a616b9/volumes" Feb 04 12:45:43 crc kubenswrapper[4728]: I0204 12:45:43.759219 4728 scope.go:117] "RemoveContainer" containerID="ae43b682eaaa7e03816090627cb9b1466cf41dee4775f4c0e6e5f4d0ba4e5b75" Feb 04 12:45:43 crc kubenswrapper[4728]: I0204 12:45:43.976699 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jt7w9"] Feb 04 12:45:43 crc kubenswrapper[4728]: E0204 12:45:43.977603 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" containerName="extract-content" Feb 04 12:45:43 crc kubenswrapper[4728]: I0204 12:45:43.977640 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" containerName="extract-content" Feb 04 12:45:43 crc kubenswrapper[4728]: E0204 12:45:43.977678 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" containerName="extract-utilities" Feb 04 12:45:43 crc kubenswrapper[4728]: I0204 12:45:43.977690 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" containerName="extract-utilities" Feb 04 12:45:43 crc kubenswrapper[4728]: E0204 12:45:43.977714 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" containerName="registry-server" Feb 04 12:45:43 crc kubenswrapper[4728]: I0204 12:45:43.977722 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" containerName="registry-server" Feb 04 12:45:43 crc kubenswrapper[4728]: I0204 12:45:43.978057 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7a45b1-b5ba-40ea-a8d2-41e840a616b9" containerName="registry-server" Feb 04 12:45:43 crc kubenswrapper[4728]: I0204 12:45:43.980641 4728 util.go:30] "No sandbox for pod can be found. 
Feb 04 12:45:43 crc kubenswrapper[4728]: I0204 12:45:43.999500 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jt7w9"]
Feb 04 12:45:44 crc kubenswrapper[4728]: I0204 12:45:44.083100 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrm6c\" (UniqueName: \"kubernetes.io/projected/4d0d2881-edc5-46b5-b77c-30d9ad31b990-kube-api-access-lrm6c\") pod \"community-operators-jt7w9\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") " pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:44 crc kubenswrapper[4728]: I0204 12:45:44.083225 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-catalog-content\") pod \"community-operators-jt7w9\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") " pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:44 crc kubenswrapper[4728]: I0204 12:45:44.083562 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-utilities\") pod \"community-operators-jt7w9\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") " pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:44 crc kubenswrapper[4728]: I0204 12:45:44.185814 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrm6c\" (UniqueName: \"kubernetes.io/projected/4d0d2881-edc5-46b5-b77c-30d9ad31b990-kube-api-access-lrm6c\") pod \"community-operators-jt7w9\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") " pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:44 crc kubenswrapper[4728]: I0204 12:45:44.185960 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-catalog-content\") pod \"community-operators-jt7w9\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") " pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:44 crc kubenswrapper[4728]: I0204 12:45:44.186123 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-utilities\") pod \"community-operators-jt7w9\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") " pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:44 crc kubenswrapper[4728]: I0204 12:45:44.186582 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-catalog-content\") pod \"community-operators-jt7w9\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") " pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:44 crc kubenswrapper[4728]: I0204 12:45:44.186597 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-utilities\") pod \"community-operators-jt7w9\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") " pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:44 crc kubenswrapper[4728]: I0204 12:45:44.204797 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrm6c\" (UniqueName: \"kubernetes.io/projected/4d0d2881-edc5-46b5-b77c-30d9ad31b990-kube-api-access-lrm6c\") pod \"community-operators-jt7w9\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") " pod="openshift-marketplace/community-operators-jt7w9"
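The reconciler lines above trace the volume manager's desired-state vs. actual-state loop: for each of the pod's three volumes it first verifies attachment, then starts MountVolume, and a "MountVolume.SetUp succeeded" line moves the volume into the actual state. A simplified, illustrative sketch of that reconcile pattern (not kubelet source):

```go
// Sketch of a reconcile loop: anything in the desired state that is
// not yet in the actual state gets a mount operation; success records
// it so the next pass is a no-op.
package main

import "fmt"

func reconcile(desired []string, mounted map[string]bool) {
	for _, vol := range desired {
		if mounted[vol] {
			continue // already in the actual state
		}
		fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
		// ... the volume plugin's SetUp() would run here; assume success ...
		mounted[vol] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
	}
}

func main() {
	desired := []string{"kube-api-access-lrm6c", "catalog-content", "utilities"}
	mounted := map[string]bool{}
	reconcile(desired, mounted) // first pass mounts all three
	reconcile(desired, mounted) // second pass does nothing
}
```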
"MountVolume.SetUp succeeded for volume \"kube-api-access-lrm6c\" (UniqueName: \"kubernetes.io/projected/4d0d2881-edc5-46b5-b77c-30d9ad31b990-kube-api-access-lrm6c\") pod \"community-operators-jt7w9\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") " pod="openshift-marketplace/community-operators-jt7w9" Feb 04 12:45:44 crc kubenswrapper[4728]: I0204 12:45:44.304103 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jt7w9" Feb 04 12:45:44 crc kubenswrapper[4728]: I0204 12:45:44.795873 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jt7w9"] Feb 04 12:45:45 crc kubenswrapper[4728]: I0204 12:45:45.359055 4728 generic.go:334] "Generic (PLEG): container finished" podID="4d0d2881-edc5-46b5-b77c-30d9ad31b990" containerID="fd3202c36c9eb2fb209854ae5e43bca2e49ce02b2abec0335132c6393135b1fb" exitCode=0 Feb 04 12:45:45 crc kubenswrapper[4728]: I0204 12:45:45.359157 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7w9" event={"ID":"4d0d2881-edc5-46b5-b77c-30d9ad31b990","Type":"ContainerDied","Data":"fd3202c36c9eb2fb209854ae5e43bca2e49ce02b2abec0335132c6393135b1fb"} Feb 04 12:45:45 crc kubenswrapper[4728]: I0204 12:45:45.359399 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7w9" event={"ID":"4d0d2881-edc5-46b5-b77c-30d9ad31b990","Type":"ContainerStarted","Data":"81f7404855c3002eca873593bab10815f802828233e8800f55d33e824c84fcdf"} Feb 04 12:45:45 crc kubenswrapper[4728]: I0204 12:45:45.362179 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 04 12:45:46 crc kubenswrapper[4728]: I0204 12:45:46.370527 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7w9" event={"ID":"4d0d2881-edc5-46b5-b77c-30d9ad31b990","Type":"ContainerStarted","Data":"18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b"} Feb 04 12:45:48 crc kubenswrapper[4728]: I0204 12:45:48.393350 4728 generic.go:334] "Generic (PLEG): container finished" podID="4d0d2881-edc5-46b5-b77c-30d9ad31b990" containerID="18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b" exitCode=0 Feb 04 12:45:48 crc kubenswrapper[4728]: I0204 12:45:48.393406 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7w9" event={"ID":"4d0d2881-edc5-46b5-b77c-30d9ad31b990","Type":"ContainerDied","Data":"18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b"} Feb 04 12:45:49 crc kubenswrapper[4728]: I0204 12:45:49.404341 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7w9" event={"ID":"4d0d2881-edc5-46b5-b77c-30d9ad31b990","Type":"ContainerStarted","Data":"30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3"} Feb 04 12:45:49 crc kubenswrapper[4728]: I0204 12:45:49.427723 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jt7w9" podStartSLOduration=2.909517548 podStartE2EDuration="6.427705695s" podCreationTimestamp="2026-02-04 12:45:43 +0000 UTC" firstStartedPulling="2026-02-04 12:45:45.361813185 +0000 UTC m=+4694.504517560" lastFinishedPulling="2026-02-04 12:45:48.880001322 +0000 UTC m=+4698.022705707" observedRunningTime="2026-02-04 12:45:49.421951464 +0000 UTC m=+4698.564655849" watchObservedRunningTime="2026-02-04 
Feb 04 12:45:54 crc kubenswrapper[4728]: I0204 12:45:54.305451 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:54 crc kubenswrapper[4728]: I0204 12:45:54.306046 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:54 crc kubenswrapper[4728]: I0204 12:45:54.352825 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:54 crc kubenswrapper[4728]: I0204 12:45:54.486205 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:54 crc kubenswrapper[4728]: I0204 12:45:54.596150 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jt7w9"]
Feb 04 12:45:56 crc kubenswrapper[4728]: I0204 12:45:56.478608 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jt7w9" podUID="4d0d2881-edc5-46b5-b77c-30d9ad31b990" containerName="registry-server" containerID="cri-o://30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3" gracePeriod=2
Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.068562 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jt7w9"
Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.245461 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-utilities\") pod \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") "
Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.245596 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-catalog-content\") pod \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") "
Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.245827 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrm6c\" (UniqueName: \"kubernetes.io/projected/4d0d2881-edc5-46b5-b77c-30d9ad31b990-kube-api-access-lrm6c\") pod \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\" (UID: \"4d0d2881-edc5-46b5-b77c-30d9ad31b990\") "
Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.246448 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-utilities" (OuterVolumeSpecName: "utilities") pod "4d0d2881-edc5-46b5-b77c-30d9ad31b990" (UID: "4d0d2881-edc5-46b5-b77c-30d9ad31b990"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.251792 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0d2881-edc5-46b5-b77c-30d9ad31b990-kube-api-access-lrm6c" (OuterVolumeSpecName: "kube-api-access-lrm6c") pod "4d0d2881-edc5-46b5-b77c-30d9ad31b990" (UID: "4d0d2881-edc5-46b5-b77c-30d9ad31b990"). InnerVolumeSpecName "kube-api-access-lrm6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
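The "Killing container with a grace period ... gracePeriod=2" line above is the kubelet asking the runtime to stop the container politely within a 2-second window before force-killing it. The general shape of that pattern, sketched against an ordinary process rather than CRI-O's actual implementation:

```go
// Sketch of stop-with-grace: send SIGTERM, wait up to the grace
// period for a clean exit, then SIGKILL. This mirrors the pattern
// behind gracePeriod=2 in the log; it is not CRI-O code.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite request first
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace expired: force kill
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "30")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	err := stopWithGrace(cmd, 2*time.Second) // gracePeriod=2, as in the log
	fmt.Println("stopped:", err)
}
```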
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.293985 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d0d2881-edc5-46b5-b77c-30d9ad31b990" (UID: "4d0d2881-edc5-46b5-b77c-30d9ad31b990"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.347844 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.347896 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrm6c\" (UniqueName: \"kubernetes.io/projected/4d0d2881-edc5-46b5-b77c-30d9ad31b990-kube-api-access-lrm6c\") on node \"crc\" DevicePath \"\"" Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.347911 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0d2881-edc5-46b5-b77c-30d9ad31b990-utilities\") on node \"crc\" DevicePath \"\"" Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.488779 4728 generic.go:334] "Generic (PLEG): container finished" podID="4d0d2881-edc5-46b5-b77c-30d9ad31b990" containerID="30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3" exitCode=0 Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.488831 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jt7w9" Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.488838 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7w9" event={"ID":"4d0d2881-edc5-46b5-b77c-30d9ad31b990","Type":"ContainerDied","Data":"30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3"} Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.488944 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7w9" event={"ID":"4d0d2881-edc5-46b5-b77c-30d9ad31b990","Type":"ContainerDied","Data":"81f7404855c3002eca873593bab10815f802828233e8800f55d33e824c84fcdf"} Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.488964 4728 scope.go:117] "RemoveContainer" containerID="30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3" Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.528655 4728 scope.go:117] "RemoveContainer" containerID="18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b" Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.536706 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jt7w9"] Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.550198 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jt7w9"] Feb 04 12:45:57 crc kubenswrapper[4728]: I0204 12:45:57.565289 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d0d2881-edc5-46b5-b77c-30d9ad31b990" path="/var/lib/kubelet/pods/4d0d2881-edc5-46b5-b77c-30d9ad31b990/volumes" Feb 04 12:45:58 crc kubenswrapper[4728]: I0204 12:45:58.073010 4728 scope.go:117] "RemoveContainer" containerID="fd3202c36c9eb2fb209854ae5e43bca2e49ce02b2abec0335132c6393135b1fb" Feb 04 
Feb 04 12:45:58 crc kubenswrapper[4728]: I0204 12:45:58.214916 4728 scope.go:117] "RemoveContainer" containerID="30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3"
Feb 04 12:45:58 crc kubenswrapper[4728]: E0204 12:45:58.215448 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3\": container with ID starting with 30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3 not found: ID does not exist" containerID="30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3"
Feb 04 12:45:58 crc kubenswrapper[4728]: I0204 12:45:58.215486 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3"} err="failed to get container status \"30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3\": rpc error: code = NotFound desc = could not find container \"30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3\": container with ID starting with 30b7b080e80971af75e98fb27955c4153af8ffafffe9fe7dab2f150add0ef4d3 not found: ID does not exist"
Feb 04 12:45:58 crc kubenswrapper[4728]: I0204 12:45:58.215526 4728 scope.go:117] "RemoveContainer" containerID="18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b"
Feb 04 12:45:58 crc kubenswrapper[4728]: E0204 12:45:58.216028 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b\": container with ID starting with 18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b not found: ID does not exist" containerID="18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b"
Feb 04 12:45:58 crc kubenswrapper[4728]: I0204 12:45:58.216076 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b"} err="failed to get container status \"18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b\": rpc error: code = NotFound desc = could not find container \"18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b\": container with ID starting with 18ce21f9792fcf91d408c788dfdd579f053c629a5400e4038b435685dcb66d0b not found: ID does not exist"
Feb 04 12:45:58 crc kubenswrapper[4728]: I0204 12:45:58.216107 4728 scope.go:117] "RemoveContainer" containerID="fd3202c36c9eb2fb209854ae5e43bca2e49ce02b2abec0335132c6393135b1fb"
Feb 04 12:45:58 crc kubenswrapper[4728]: E0204 12:45:58.216899 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd3202c36c9eb2fb209854ae5e43bca2e49ce02b2abec0335132c6393135b1fb\": container with ID starting with fd3202c36c9eb2fb209854ae5e43bca2e49ce02b2abec0335132c6393135b1fb not found: ID does not exist" containerID="fd3202c36c9eb2fb209854ae5e43bca2e49ce02b2abec0335132c6393135b1fb"
Feb 04 12:45:58 crc kubenswrapper[4728]: I0204 12:45:58.216926 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd3202c36c9eb2fb209854ae5e43bca2e49ce02b2abec0335132c6393135b1fb"} err="failed to get container status \"fd3202c36c9eb2fb209854ae5e43bca2e49ce02b2abec0335132c6393135b1fb\": rpc error: code = NotFound desc = could not find container \"fd3202c36c9eb2fb209854ae5e43bca2e49ce02b2abec0335132c6393135b1fb\": container with ID starting with fd3202c36c9eb2fb209854ae5e43bca2e49ce02b2abec0335132c6393135b1fb not found: ID does not exist"
Feb 04 12:46:35 crc kubenswrapper[4728]: I0204 12:46:35.449102 4728 patch_prober.go:28] interesting pod/machine-config-daemon-grzvj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 04 12:46:35 crc kubenswrapper[4728]: I0204 12:46:35.450081 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grzvj" podUID="3c8409df-def9-46a0-a813-6788ddf1e292" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"